Are MOOCs Fatally Flawed Concepts That Need Saving by Bots?

MOOCs are a problematic concept, as are many things in education. Using bots to replicate various functions in MOOCs is also a problematic concept. Both MOOCs and bots seem to go in the opposite direction of what we know works in education (smaller class sizes and more interaction with humans). So, teaching with either or both concepts will open the doors to many different sets of problems.

However… there are also even bigger problems that our society is imposing on education (at least in some parts of the world): defunding of schools, lack of resources, and eroding public trust being just a few. I don’t like any of those, and I will continue to speak out against them. But I also can’t change them overnight.

So what do we do with the problems of fewer resources, fewer teachers, more students, and more information to teach as the world gets more complex? Some people like to focus solely on fixing the systemic issues causing these problems. And we need those people. But even once they do start making headway… it will still be years before education improves from where it is. And how long until we even start making headway?

The current state of research into MOOCs and/or bots is really about dealing with the reality of where education is right now. Despite there being some larger, well-funded research projects into both, the reality is that most research consists of very low (or no) budget attempts to learn something about how to create some “thing” that can help a shrinking pool of teachers educate a growing mass of students. Imperfect solutions for an imperfect society. I don’t fully like it, but I can’t ignore it.

Unfortunately, many people are causing an unnecessary either/or conflict between “dealing with scale as it is now” and “fixing the system that caused the scale in the first place.” We can work at both – help education scale now, while pushing for policy and culture change to better support and fund education as a whole.

On top of all of that, MOOCs tend to be “massively” misunderstood (sorry, couldn’t resist that one). Despite what the hype claims, they weren’t created as a means to scale or democratize education. The first MOOCs were really about connectivism, autonomy, learner choices, and self-directed learning. The fact that they had thousands of learners in them was just a thing that happened due to the openness, not an intended feature.

Then the “second wave” of MOOCs came along, and that all changed. A lot of this was due to some unfortunate hype around MOOCs published in national publications that proclaimed some kind of “educational utopia” of the future, where MOOCs would “democratize” education and bring quality online learning to all people.

Most MOOC researchers just scoffed at that idea – and they still do. However, they also couldn’t ignore the fact that MOOCs do bring about scaled education in various ways, even if that was not the intention. So that is where we are at now: if you are going to research MOOCs, you have to realize that the context of that research will be about scale and autonomy in some way.

But it seems that the misunderstandings of MOOCs are hard-coded into the discourse now. Take the recent article “Not Even Teacher-Bots Will Save Massive Open Online Courses” by Derek Newton. Of course, open education and large courses existed long before they were coined “MOOCs”… so it is unclear what needs “saving” here, or what it needs to be saved from. But the article is a critique of a study out of the University of Edinburgh (I believe this is the study, even though Newton never links to it for you to read for yourself) that sought “to increase engagement” by designing and deploying “a teacher-bot (botteacher) in at least one MOOC.” Newton then turns around and says “the idea that a pre-programmed bot could take over some teaching duties is troubling in a Blade Runner kind of way.” Right there you have your first problematic switch-a-roo. “Increasing engagement” is not the same as “taking over some teaching duties.” That is like saying that lane departure warning lights on cars are the same as taking over some driving duties. You can’t conflate something that assists with something that takes over. Your car will crash if you think “lane departure warnings” are “self-driving cars.”

But the crux of Newton’s article is that because the “bot-assisted platform pulls in just 423 of 10,145, it’s fair to say there may be an engagement problem…. Botty probably deserves some credit for teaching us, once again, that MOOCs are fatally flawed and that questions about them are no longer serious or open.”  Of course, there are fatal flaws in all of our current systems – political, religious, educational, etc. – yet questions about all of those can still be serious or open. So you kind of have to toss out that last part as opinion and not logic.

The bigger issue is that calling 423 people an “engagement problem” is an unfortunate way to look at education. That is still a lot of people, considering most courses at any level can’t engage 30 students. But this misunderstanding comes from the fact that many people still misunderstand what MOOC enrollment means. 10,000 people signing up for a MOOC is not the same as 10,000 people signing up for a typical college course. Colleges advertise to millions of prospective students, who then have to go through a huge process of applications and trials to even get to register for a course. ALL of that is bypassed for a MOOC. You see a course and click to register. Done. If colleges did the same, they would also get 10,000+ signing up for a course. But they would probably only get 50-100 showing up for the first class – a lot fewer than any first week in most MOOCs.

Make no mistake: college courses would have engagement rates just as bad if they removed the filters of application and enrollment on who could sign up. Additionally, the requirement of “physical re-location” for most would make those engagement rates even worse than MOOCs’ if the entire process were considered.

Look at it this way: 30 years ago, if someone said “I want to learn History beyond what a book at the library can tell me,” they would have to go through a long and expensive process of applying to various universities, finally (possibly) getting accepted at one, and then moving to where that University was physically located. Then, they would have to pay hundreds or thousands of dollars for that first course. How many tens of thousands of possible students get filtered out of the process because of all of that? With MOOCs, all of that is bypassed. Find a course on History, click to enroll, and you are done.

When we talk about “engagement” in courses, it is typically situated in a traditional context that filters out tens of thousands of people before the course even starts. To then transfer the same terminology to MOOCs is to utilize an inaccurate critique based on concepts rooted in a completely different filtering mechanism.

Unfortunately, these fundamentally flawed misunderstandings of MOOC research are not just one-off rarities. This same author also took a problematic look at a study I helped with alongside Aras Bozkurt and Whitney Kilgore. Just look at the title of Newton’s previous article: Are Teachers About To Be Replaced By Bots? Yeah, we didn’t even go that far, and intentionally made sure to stay as far away from saying that as possible.

Some of the critique of our work by Newton is just very weird, like where he says: “Setting aside existential questions such as whether lines of code can search, find, utter, reply or engage discussions.” Well, yes – they can do that. It’s not really an existential question at all. It’s a question of “come sit at a computer with me and I will show you that a bot is doing all of that.” Google has had bots doing this for a long, long time. We have pretty much proven that Russian bots are doing this all over the world.

Then Newton gets into pull quotes, where I think he misunderstands what we meant by the word “fulfill.” For example, it seems Newton misunderstood this quote from our article: “it appears that Botty mainly fulfils the facilitating discourse category of teaching presence.” If you read our quote in context, it is part of the Findings and Discussion section, where we are discussing what the bot actually did. But it is clear from the discussion that we don’t mean that Botty “fully fills” the discourse category, but that what it does “fully qualifies” as being in that category. Our point was in light of “self-directed and self-regulated learners in connectivist learning environments” – a context where learners probably would not engage with the instructor in the first place. In this context, yes, it did seem that Botty was filling in for an important instructor role in a way that satisfies the parameters of that category. Not perfectly, and not in a way that replaces the teacher. It was in a context where the teacher wasn’t able to be present due to the realities of where education is currently in society – scaled and less supported.

Newton goes on to say: “What that really means is that these researchers believe that a bot can replace at least one of the three essential functions of teaching in a way that’s better than having a human teacher.”

Sorry, we didn’t say “replace” in an overall context, only “replace” in a specific context that is outside of the teacher’s reach. We also never said “better than having a human teacher.” That part is just a shameful attempt at putting words in our mouths. In fact, you can search the entire article and find we never said the word “better” about anything.

Then Newton goes on to misuse another quote of ours (“new technological advances would not replace teachers just because teachers are problematic or lacking in ability, but would be used to augment and assist teachers”). His response to this is to say that we think “new technology would not replace teachers just because they are bad but, presumably, for other reasons entirely.”

Sorry, Newton, but did you not read the sentence directly after the one you quoted? We said “The ultimate goal would not be to replace teachers with technology, but to create ways for non-human teachers to work in conjunction with human teachers in ways that remove all ontological hierarchies.”  Not replacing teachers…. working in conjunction. Huge difference.

Newton continues injecting inaccurate ideas into the discussion, such as “Bots are also almost certain to be less expensive than actual teachers too.” Well, actually, they currently aren’t always less expensive in the long run. Then he tries to connect another quote from us, about how the lines between bots and teachers might get blurred, as proof that we… think they will cost less? That part just doesn’t make sense.

Newton also did not take time to understand what we meant by “post-humanist,” as evidenced by this statement of his: “the analysis of Botty was done, by design, through a “post-humanist” lens through which human and computer are viewed as equal, simply an engagement from one thing to another without value assessment.” Contrast his statement with our actual statement on post-humanism: “Bayne posits that educators can essentially explore how to retain the value of teacher presence in ways that are not in opposition to some forms of automation.” Right there we clearly state that humans still maintain value in our study context.

Then Newton pulls his most shameful bait and switch of the whole article at the end: pulling out one of our “problematic questions” (where we intentionally highlighted problematic questions for the sake of critique) and presenting it as our conclusion: “the role of the human becomes more and more diminished.” Newton then goes on to state: “By human, they mean teacher. And by diminished, they mean irrelevant.”

Sorry Newton, that is simply not true. Look at the question following soon after that one, where we start with “or” to negate what our list of problematic questions asks: “Or should AI developers maintain a strong post-humanist angle and create bot-teachers that enhance education while not becoming indistinguishable from humans?” Then, maybe read our conclusion after all of that and the points it makes, like “bot-teachers can possibly be viewed as a learning assistant on the side.”

The whole point of our article was to say: “Don’t replace human teachers with bot teachers. Research how people mistake bots for real people and fix that problem with the bots. Use bots to help in places where teachers can’t reach. But above all, keep the humans at the center of education.”

Anyways, after a long side-tangent about our article, back to the point of misunderstanding MOOCs, and how researchers of MOOCs view MOOCs. You can’t evaluate research about a topic – whether MOOCs or bots or post-humanism or any other topic – through a lens that fundamentally misunderstands what the researchers were examining in the first place. All of these topics have flaws and concerns, and we need to critically think about them. But we have to do so through the correct lens and contextual understanding, or else we will cause more problems than we solve in the long run.

Can You Automate OER Evaluation With The RISE Framework?

The RISE Framework is a learning analytics methodology for identifying OER resources in a course that may need improvement. On one level, this is an interesting development, since so few learning analytics projects actually get into how to improve the actual education of learners. On the other hand, I am not sure this framework reflects a detailed enough understanding of instructional design. A few key points seem to be missing. It’s still early, though, so we will see.

The basic idea of the RISE Framework is that analytics will create a graph that plots page clicks on OER resources on the x-axis and grades on assessments on the y-axis. This creates a grid showing where there were higher than average grades with higher than average clicks, higher than average grades with lower than average clicks, lower than average grades with higher than average clicks, and lower than average grades with lower than average clicks. This is meant to identify the resources that teachers should consider examining for improvement (especially focusing on the ones that got a high number of clicks but lower grade scores). Note that this is not meant to definitively say “this is where there is a problem, so fix it now” but more “there may or may not be a problem here, so check it out.” Keep that in mind while I explore some of my doubts here, because I would be a lot harsher on this if it were presented as a tool to definitively point out exact problems rather than what it is: a way to start the search for problems.
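
As a concrete illustration, here is a minimal sketch (in Python, with pandas) of how that quadrant grid could be computed. The column names and numbers are all hypothetical – this is my own illustration of the idea, not code from the RISE Framework itself:

```python
import pandas as pd

# Hypothetical per-resource averages (made-up numbers for illustration)
df = pd.DataFrame({
    "resource":   ["ch1", "ch2", "ch3", "ch4"],
    "avg_clicks": [120, 45, 210, 60],
    "avg_grade":  [0.82, 0.91, 0.58, 0.60],
})

# Mean-center both measures so the quadrants split at "average"
df["high_use"]   = df["avg_clicks"] > df["avg_clicks"].mean()
df["high_grade"] = df["avg_grade"]  > df["avg_grade"].mean()

def quadrant(row):
    if row.high_use and row.high_grade:
        return "high use / high grades"
    if not row.high_use and row.high_grade:
        return "low use / high grades"
    if row.high_use and not row.high_grade:
        return "high use / low grades"  # the first place to look for problems
    return "low use / low grades"

df["quadrant"] = df.apply(quadrant, axis=1)
print(df[["resource", "quadrant"]])
```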

Of course, any system of comparing grades with clicks is itself problematic on many fronts, and the creators of the RISE Framework do take this into consideration when spelling out what each of the four quadrants could mean. For example, in the quadrant that specifies high grades with low content usage, they not only identify “high content quality” as a possible cause, but also “high prior knowledge,” “poorly written assessment,” and so on. So this is good – many factors outside of grades and usage are taken into account. This is because, on the grade front, we know that scores are a reflection of a massive number of factors – the quality of the content being only one of those (and not always the biggest one). As noted, prior knowledge can affect grades (sometimes negatively – not always positively, as the RISE Framework appears to assume). Exhaustion or boredom or anxiety can impact grades. Again, I am glad that these are in the framework, but the effect these have on grades is assumed in one direction – rather than the complex directions they take in real life. For example, students that game the test or rubric can inflate scores without using the content much – even on well-designed assessments (I did that all the time in college).

However, the bigger concern with the way grades are addressed in the RISE Framework is that they are plotting assessment scores instead of individual item scores. Anyone that has analyzed assessment data can tell you that the final score on a test is actually an aggregate of many smaller items (test questions). That aggregate grade can mask many deficiencies at the micro level. That is why instructors prefer to analyze individual test questions or rubric lines rather than the aggregate scores of the entire test. An assessment could cover, say, 45 questions of content that were well covered in the resources, and then 5 questions that are poorly covered. But the high scores on the 45 questions, combined with the fact that many students will get some questions right by random guessing on the other 5, could result in test scores that mask a massive problem with those 5 questions. But teachers can most likely figure that out quickly without the RISE Framework, and I will get to that later.
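
Some quick back-of-the-envelope arithmetic makes the masking concrete. This is a minimal sketch with made-up numbers (45 solid items, 5 weak ones, 4-option multiple choice guessing), not data from any real assessment:

```python
# Made-up numbers: 45 well-covered items answered correctly 90% of the
# time, plus 5 poorly covered items where students mostly guess
# (about 25% correct on 4-option multiple choice).
expected_correct = 45 * 0.90 + 5 * 0.25
print(f"Expected overall score: {expected_correct / 50:.0%}")  # low 80s, looks fine

# An item-level view exposes what the aggregate hides:
for label, p_correct in [("well-covered item", 0.90), ("poorly covered item", 0.25)]:
    print(f"average {label}: {p_correct:.0%} correct")
```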

The other concern is with clicks on the OER. Well, they say that you can measure “pageviews, time spent, or content page ratings”… but those first two are still clicks, and the last one is a bit too dependent on the happiness of the raters (students) at any given moment to really be that quantitative. I wouldn’t outright discount it as a factor, but I will state that you are always going to find a close alignment with the test scores on that one for many reasons. In other words, it is a pre-biased factor – students that get a high score will probably rate the content as effective even if it wasn’t, and students that get a low score will probably blame the content quality whether it was really a factor or not.

Also, now that students know their clicks are being recorded, they are more and more often clicking around to make sure they get good numbers on those data points. I even do that when taking MOOCs, just in case: click through the content at a realistic pace even if I am really doing something else other than reading. People have learned to skim resources while checking their phone, clicking through at a pace that makes it seem like they are reading closely. Most researchers are very wary of using click data like pageviews or time spent to tell anything other than where students clicked, how long between clicks, and what was clicked on. Guessing what those mean beyond that? More and more, that is being discouraged in research (and for good reason).

Of course, I don’t have time to go into how relying only on content and assessment is a poor way to teach a course, but I think we all know that. A robust and helpful learning community in a class can answer learning questions and help learners overcome bad resources to get good grades. And I am not referring to cheating here – Q&A forums in courses can often really help some learners understand bad readings – while also possibly making them feel like they are the problem, not the content.

Still, all of that is somewhat or directly addressed in the framework, and because it is a guide rather than a definitive answer, variations like those discussed above are to be expected. I covered them just to make sure I was covering all the critical bases.

The biggest concern I have with the RISE framework really comes here: “The framework assumes that both OER content and assessment items have been explicitly aligned with learning outcomes, allowing designers or evaluators to connect OER to the specific assessments whose success they are designed to facilitate.”

Well, since that doesn’t happen in many courses due to time constraints, that eliminates large chunks of courses right there. I can also tell you as an instructional designer that many people think they have well-aligned outcomes… but don’t.

But let’s assume that you do have a course where “content and assessment items have been explicitly aligned with learning outcomes.” If you have explicitly aligned assessments, you don’t need the RISE Framework. To explicitly align assessments with content is not just a matter of making sure each question tests exactly what is in the content, but also of pointing to exactly where the aligned content is for each question – not just the OER itself, but the chapter and page number. Most testing systems today will give you an item-by-item breakdown of each assessment (because teachers have been asking for it). Any low course-wide score on any specific question indicates some kind of problem. At that point, it is best (and quickest) to just ask your learners:

  1. Did the question make sense? Was it well written?
  2. Did it connect to the content?
  3. Did the content itself make sense?

Plus, most content hosting systems have ways to track page clicks, so you can easily make your own matrix using clicks if you need to. The matrix in the framework might give you a good way to organize the data to see where your problem lies… but to be honest, I think it would be quicker and more accurate to focus on the individual assessment questions instead of the whole test, and ask the learners about specific questions.

Also, explicit alignment can itself hide problems with the content. An explicit alignment would require that you test what is in the content, even if the content is bad. This is one of the many things you learn as an ID: don’t test what students weren’t taught; write your test questions to match the content no matter what. A decently-aligned assessment can still produce decent grades from a very bad content source. One of my ID professors once told me something along the lines of “a good instructional designer can help students pass even with bad textbooks; a bad instructional designer can help them fail with the best textbook.”

Look – instructional designers have been dealing with good and bad textbooks for decades now. Same goes for instructors that serve as their own IDs. We have many ways to work around those.

I may be getting the RISE framework wrong, but comparing overall scores on assessments to certain click-stream activity in OER (sometimes an entire book) comes across like shooting fish in a barrel with a shotgun approach. Especially when well-aligned test questions can pinpoint specific sources of problems at a fairly micro-fine level.

Now then, if you could actually compare the grades on individual assessment items with the amount of time spent on the page or area that each specific item came from, you might be on to something. Then, if you could group students into the four quadrants on each item, and then compare quadrant results across all items in the same assessment, you could probably identify the questions that are most likely to have some kind of issue. Then, have the system send out a questionnaire about the test to each student – but have the questionnaire be custom-built depending on which quadrant the student was placed in. In other words, each learner gets questions about the same, say, 5 test questions that were identified as problematic, but the specific question they get about each one is changed to match the quadrant they were placed in for that item (a rough sketch of how this could work follows the example below):

We see that you missed Question 4, but you did spend a good amount of time on page 25 of the book, where this question was taken from. Would you say that:

  • The text on page 25 was not well-written
  • Question 4 was not well-written
  • The text on page 25 doesn’t really match Question 4
  • I visited page 25, but did not spend the full time there reading the text
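
Here is a minimal sketch of that per-item quadrant idea, assuming you could actually pull item-level correctness and time-on-page data out of your assessment and content systems. Every record, cutoff, and prompt below is hypothetical – it just shows the mechanics of matching a questionnaire variant to a quadrant:

```python
from statistics import median

# Hypothetical (student, item) records: whether the student answered the
# item correctly, and seconds spent on the page the item was drawn from.
records = [
    {"student": "s1", "item": "Q4", "correct": False, "seconds": 310},
    {"student": "s2", "item": "Q4", "correct": True,  "seconds": 45},
    {"student": "s3", "item": "Q4", "correct": False, "seconds": 20},
    {"student": "s4", "item": "Q4", "correct": True,  "seconds": 280},
]

# Split "time on page" at the median for this item.
cutoff = median(r["seconds"] for r in records)

# One questionnaire variant per quadrant (wording is illustrative only).
prompts = {
    (False, True):  "You spent time on the page but missed the question. "
                    "Was the text or the question unclear?",
    (False, False): "You missed the question and spent little time on the "
                    "page. Did you get a chance to read the aligned text?",
    (True,  True):  "You spent time on the page and answered correctly. "
                    "Did the text actually help you answer?",
    (True,  False): "You answered correctly without much time on the page. "
                    "Did you already know this material?",
}

for r in records:
    key = (r["correct"], r["seconds"] >= cutoff)
    print(f'{r["student"]} / {r["item"]}: {prompts[key]}')
```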

Of course, writing it out this way sounds creepy. You would have to make sure that learners opt in to this after fully understanding that this is what would happen, and then you would probably need to make sure that the responses go to someone that is not directly responsible for their grade, to be analyzed anonymously. Then report those results in a generic way: “survey results identified that there is probably not a good alignment between page 25 and question 4, so please review both to see if that is the case.”

In the end, though, I am not sure if you can get detailed enough to make this framework effective without diving deep into surveillance monitoring. Maybe put the learner in control of these tools, and give them the option of sharing the results with their instructor if they feel comfortable?

But, to be honest, I am probably not in the target audience for this tool. My idea of a well-designed course involves self-determined learning, learner autonomy, and space for social interaction (for those that choose to do so). I would focus on competencies rather than outcomes, with learners being able to tailor the competencies to their own needs. All of that makes assessment alignment very difficult.

“Creating Online Learning Experiences” Book is Now Available as an OER

Well, big news in the EduGeek Journal world. I have been heading up a team of people working on a new book that was released as an OER through PressBooks today:

Creating Online Learning Experiences: A Brief Guide to Online Courses, from Small and Private to Massive and Open

Book Description: The goal of this book is to provide an updated look at many of the issues that comprise the online learning experience creation process. As online learning evolves, the lines and distinctions between the various classifications of courses have blurred and often vanished. Classic elements of instructional design remain relevant at the same time that newer concepts of learning experience are growing in importance. However, problematic issues new and old still have to be addressed. This book aims to be a handbook that explores as many of these issues and concepts as possible for new and experienced designers alike, whether creating traditional online courses, open learning experiences, or anything in between.

We have been working on this book on and off for three or more years now, so I am glad to finally get it out to the world. In addition to me, there were several great contributing writers: Brett Benham, Justin Dellinger, Amber Patterson, Peggy Semingson, Catherine Spann, Brittany Usman, and Harriet Watkins.

Also, on top of that, we recruited a great group of reviewers that dug through various parts and gave all kinds of helpful suggestions and edits: Maha Al-Freih, Maha Bali, Autumm Caines, Justin Dellinger, Chris Gilliard, Rebecca Heiser, Rebecca Hogue, Whitney Kilgore, Michelle Reed, Katerina Riviou, Sarah Saraj, George Siemens, Brittany Usman, and Harriet Watkins.

Still skeptical? How about an outline of topics, most of which we did try to filter through a critical lens to some degree:

  1. Overview of Online Courses
  2. Basic Philosophies
  3. Institutional Courses
  4. Production Timelines and Processes
  5. Effective Practices
  6. Creating Effective Course Activities
  7. Creating Effective Course Content
  8. Open Educational Resources
  9. Assessment and Grading Issues
  10. Creating Quality Videos
  11. Utilizing Social Learning in Online Courses
  12. Mindfulness in Online Courses
  13. Advanced Course Design
  14. Marketing of an Online Course

So, please download and read the book here if you like: Creating Online Learning Experiences

There is also a blog post from UTA libraries about the release: Libraries Launch Authoring Platform, Publish First OER

And if you don’t like something you read, or find something that is wrong, or think of something that should have been added – let me know! I would love to see an expanded second edition with more reviewers and contributing authors. There were so many more people I wanted to ask to contribute, but I just ran out of time. I intentionally avoided the “one author/one chapter” structure so that you can add as much or as little as you like.

Working With Resistant Faculty as an Instructional Designer

One of the questions I get most often from people new to instructional design is how to work with faculty that are resistant to making changes to their course ideas (or maybe even resistant to working with an instructional designer altogether). To be honest, once you have gotten to the place of resistance, you can generally find all kinds of advice for dealing with disagreements that will work. There really isn’t anything special or different about interacting with people you don’t see eye to eye with just because you are an instructional designer.

However, I have found that there are ways to start off the working relationship in an instructional design context that can set a tone for collaboration or disagreement on down the line. There are a few things that I try to do at first contact with faculty to get off on the right foot. So my biggest piece of advice is always to set up the relationship the right way from the beginning; then you should have a smoother time working with faculty (and if disagreements arise, a good foundation for working towards agreement).

The first thing I tell people to do is to get your view of the faculty members into the right context in your mind. Of course, this includes setting aside any pre-conceived notions you might have gained about them from other people – but it is more than that. I try to keep in mind what the workflow of the faculty actually looks like, especially how busy they are. Not necessarily more or less busy than you are, but busy in an entirely different context. They are having to deal with hundreds of student emails, then all kinds of research-related emails, then all kinds of department-related issues, and so on. When you send them that initial contact email, you can probably guarantee that it will be filed away until there is a lull in their workflow – later that day, later that week, or even later that month. That filing system might be anything from a folder system in Outlook to a paper notebook next to their computer (I have seen it all). But the key thing is that they are likely to put it aside without a whole lot of thought at first.

This is an important factor to remember. Some faculty might respond right away, but others will file your email and get back to you once the dozen or so urgent requests in front of them are taken care of. At this point, while you are waiting for a response, don’t make things more complex by having other people contact them as well. Many instructional design groups do this differently: the manager contacts the faculty to “introduce” the ID, then if there is no response from the faculty after a few days, the ID emails again… possibly introducing more team members along the way. By the time the faculty member gets to that lull and can respond, they have all these people contacting them, and they have to figure out if they are all working on the same project or if these are different people working on similar projects. Then they have to figure out who specifically to reply to, who was just adding extra information to the discussion, and so on.

And right there is a place where you can start to get off on the wrong foot with faculty. Instead of responding to one person, they have to take extra time to read through these emails from different people to figure out what is going on. Again, some will be fine with that, but others will feel that you and your department are “piling it on” to try and pressure them to respond faster.

So, for the sake of focus, make sure only one person contacts the faculty member or members until they respond. If you need to send multiple emails to follow up and nudge the faculty, reply to your last message so that those who use threaded email systems will end up with one email thread rather than several. Since the goal of that initial email is usually to set up a first meeting, you can make sure that the other people they need to meet are at that first meeting. And if at all possible, wait to bring those people into the conversation at the first meeting. If you really have to bring them in earlier, then at least wait until after the faculty member has responded to the initial emails.

Quite often, a manager or other person likes to send the first email to connect the ID and faculty, and then step out of the picture. If you can avoid that, I would. If the faculty doesn’t respond right away, then the manager will have to nudge. If the ID nudges instead, it introduces the complexity that I have found best to avoid at this stage. So if you are a manager, get used to letting your people make the initial contact. If you are an ID, get used to making the initial contact. It just saves time and avoids miscommunication down the line.

Remember: that first response from faculty is usually the signal that they have the open head space to deal with course design – or that they are at least ready to free up some head space for the design. So feel free to nudge them if needed, but don’t add anything else to that nudge beyond your initial “let’s meet” message.

Also, I should say more about this “let’s meet” message. Be careful how you phrase that request. So many people jump out of the gate with suggestions, like “we can meet once a month” or “once a week” or some other frequency based on what they think the course needs. And they are probably right about that suggestion. But remember that the faculty you are meeting with have possibly already thought about how many meetings they need with you as well. They may be flexible, but they also may have a specific need for meetings. If you come right out and suggest a specific schedule, you may stress them out by not suggesting enough meetings compared to what they want, or by suggesting more meetings than they thought they needed.

Of course, you might get lucky and suggest the exact frequency they were thinking of, the heavens will open, collaboration glitter will float down, and every one rejoices.

But you might also set up a foundation of frustration if you get it wrong. My suggestion? I always like to say that I want to “discuss a method and frequency for consistent communication to keep the course design process moving forward,” or something to that effect. When you say something like this, whatever method or frequency they were thinking of will fit into that description, and they will feel like you are there to help their course, not impose deadlines.

Which, of course, you usually are… but you don’t want to default to that position from the beginning.

However, make sure you don’t jump out first with “how about meeting twice a week” or some other specific suggestion. From this point on in interacting with faculty, always lead with questions intended to draw out what the faculty member thinks. I have found that leading with questions is a good way to collaborate more than disagree. Don’t just say “well, what we need to do instead is….” But don’t beat around the bush, either. Just ask them directly: how often do you want to meet, and in what context?

Of course, there is a good chance they will suggest something that is more often or less often than you thought, or they will suggest face-to-face meetings when you thought email would work, and so on. When this happens, try to find out (by asking questions) why they want their suggested frequency instead of going into “correction” mode.

  • “That seems to be a high frequency of meetings, and you are pretty experienced in online course design. How are you feeling about working on this specific course?”
  • “Do you think you will be able to meet the deadlines for the course design? Would it maybe help to have more frequent check-ins with me to meet deadlines?”
  • “I know you are used to face-to-face meetings with our organization. How do you feel about email check-ins? We could possibly meet less frequently if you think it will work for you to email me questions as needed.”

A quick note: multiple meetings per week will probably send the wrong message to faculty. They usually have multiple weekly meetings only with students that are struggling the most in their class, or with colleagues that can’t stay on track when working on research projects. There is kind of a stigma against being asked to meet multiple times per week in many academic circles. Don’t push back if they are the ones that say they need it, but don’t be the one to suggest it first. Not all faculty think this way, but I have learned the hard way not to be the one to bring it up with the ones that do have a preconceived notion about it.

So, really, from this point out, I would say that if you stick to asking questions first rather than jumping into correction mode, and then follow other methods and guidelines for dealing with workplace conflict or disagreements, you will know how to deal with most situations. By taking into account how you start off the working relationship with faculty, you are building a better foundation for future interactions. There is a lot more that I could cover, but this post is getting too long. If you have any suggestions for dealing with resistant faculty, let me know in the comments – there is still a lot I can learn in this area as well!

After the Cambridge Analytica Debacle, Is It Time to Ban Psychographics?

What are psychographics, you may ask? Well, you may not, but if so: the simple definition is that they are a companion to demographics, in that they try to figure out what those demographics tell us about the person behind the demographic factors. This article looks at some of the features that could go into psychographics, like figuring out if a person is “concerned with health and appearance” or “wants a healthy lifestyle, but doesn’t have much time,” or whatever the factor may be. The article was written in 2013, long before the Cambridge Analytica debacle with Facebook. That entire debacle should have people asking harder questions of Ed-Tech.

Audrey Watters will surely be writing about her question and more soon (it’s a huge topic to write about already), and Autumm Caines has already weighed in on her experiences investigating Cambridge Analytica long before most of us were aware of them. Like many people, I had to dig up some refreshers about what psychographics are after Audrey Watters’ tweet, to make sure I was thinking of the right concept. And now I want to question the whole concept of psychographics altogether. Maybe “ban” is too strong of a word, maybe not. You can be the judge.

Even in the fairly “innocent” tone of the 2013 article I linked above, there are still concerning aspects of psychographics shining through: interview your customers with the agenda of profiling them, and maybe consider telling them what you are doing if they are cool enough; you can’t trust what they say all the time, but you can trust their actions; and “people’s true motivations are revealed by the actions they take.”

But really, are they? Don’t we all do things that we know we shouldn’t sometimes, just like we sometimes say things we know we don’t believe? Isn’t the whole idea of self-regulation based on us being able to overcome our “true motivations” and do things we know we need to do, even if we aren’t truly motivated?

The whole basis of psychographics in this article is that you can trust the actions more than the words. I’m not so sure that is true, or really even a “problem” per se. We are all human. We are inconsistent. We change our minds. We don’t do what we say we should, or we do things that we say we shouldn’t at times. It is part of being alive – that is what makes life interesting and frustrating. It’s not a bug in the system to be fixed by trickery.

(Side note: anyone that really digs into psychographics will tell you that it is more complex than it was in 2013, but I don’t really have a stomach to go any more complex than that.)

So is it really fair and accurate to do this kind of profiling on people? At certain levels, I get that companies need to understand their customers. But they already have focus groups and test cases and even comment cards to gather this data from. If they don’t think they are getting accurate enough information from those sources, why would they think they could get even more accurate information from trickier methods? Either way, all words and actions come from the same human brain.

Look at the example of what to do with psychographics in marketing in the 2013 article. That whole section is all about tricking a person to buy a product, via some pretty emotionally manipulative methods. I mean, the article flat out tells readers to use a customer’s kids to sell things to them: “Did she love the one about the smiley-face veggie platters for an after-school snack? Give her more ways to help keep her kids eating well.”

Really?

What about just giving her the options of what you sell and what they are for, and let her decide what she needs?

And what if she starts showing some signs of self-destructive behavior? If the psychographics are run by AI… will it recognize that and back off? Or will it only see those behaviors as signs of what to sell, and start pushing items to that person that they don’t need? Do you think this hasn’t already happened?

Maybe I am way off base comparing psychographics to profiling and emotional manipulation. I don’t think I am. But if there is even a chance that I am not off base, what should our reaction be as a society? I don’t want to go overboard and even go so far as to get rid of customer surveys and feedback forms. If a company gives me a form designed in a way that lets me tell them what kind of ads I want to see, I wouldn’t mind that. Well, in the past I wouldn’t have minded. After Cambridge Analytica, I would want believable assurances that they would stick with what I put in the form and not try to extrapolate between the lines. I would want assurance they aren’t doing… well… anything that Cambridge Analytica did. [Insert long list of ethical violations here]

But would most companies self-regulate and stay within ethical limits with all that data dangling in front of them? Ummmmm… not so sure. We may need to consider legislating ethical limits on these activities, as well as outright banning others that prove too tempting. And then figure out how to keep the government in line on these issues as well. Just because Cambridge Analytica and Facebook are in the hot seat this week, that doesn’t mean some government department or agency won’t be in that same seat tomorrow.

Hybrid MOOCs and Dual-Layer/Self-Mapped Learning Pathways MOOCs: My Perspective on the Differences

A recent tweet from Aras Bozkurt highlights a question we often get about the work we do with dual-layer/self-mapped learning pathways courses (most often in MOOCs, but also starting to bleed over into traditional courses as well).

As soon as we started using the term “dual-layer MOOC” in 2014, people pointed out the similarities between that idea and “Hybrid MOOCs.” These are important points, because the two do share many concepts. However, there are some key differences as well – differences that, in my mind at least, exist along various continuums rather than as hard divisions into two distinct ideas.

The original distinction between an “instructivist layer” and a “connectivist layer” proved to be problematic, as many courses have aspects of both, and learners tend to mix both at different times (if given the choice) instead of choosing one or the other. So I think it is better to look at the distinction as one that focuses on who makes most of the decisions about what to mix together in the course. If most of the decisions about mixing/hybridizing the course content and activities lie with the instructor, I tend to look at those as “Hybrid MOOCs,” because it is the MOOC itself that is a hybrid. Even if there are choices (“write a paper or create a blog post or Tweet a thread”) and some of those are connectivist in nature, if those choices are restricted and designed into the course, I see it more as a Hybrid MOOC. If the learner is more in control of those choices and of how they mix the layers together, I see it more as the dual-layer concept we tried with DALMOOC. Of course, the layer idea focuses too much on the design, so that is why I now like to refer to those courses as “self-mapped learning pathways” courses – the focus should be on the pathway that the learner maps instead of on the layers.

This is a continuum, of course – with a completely instructor-controlled course on one side (all possible activities, even social/connectivist ones, chosen by the instructor) and a completely learner-driven course (like the RhizoMOOC) on the other end. The DALMOOC and HumanMOOC courses I worked with/co-taught lean heavily towards the learner-driven side, for example, while the YogaMOOC leaned slightly more towards the instructor-driven side. All of those mix elements of xMOOCs and cMOOCs together in different ways (with the RhizoMOOC most likely technically existing off one side of the spectrum because it was all community driven – but it makes a good frame of reference – and typical xMOOCs existing off the other side of the spectrum because they are all instructor controlled and usually not that complex).

Additionally, I think an important dimension to look at with these courses is one that would exist on a perpendicular axis that measures the complexity with which the course organizes or scaffolds the choice for learners. For example, courses like DALMOOC were highly organized and complex – with maps of course structure, activity banks, course metaphors for descriptions of what that structure looks like, etc. Other courses like the EngageMOOC were less complex in that aspect of the structure, with the linear content in place – but learners were told they could do various other activities as they liked. There was some structure there as well, so it was not as far down that continuum as the RhizoMOOC would be.

So you would probably end up with a grid for explaining where courses fall on these continuums – one axis for who controls the mixture, one for structural complexity – and some courses would probably shift from place to place as the course progresses:

Note: there was no scientific method for where I placed the example courses above – I just took a guess where they seemed to fall by my estimation. Feel free to disagree. The basic idea is that courses that mix various epistemologies tend to exist more on a continuum than at defined poles. Hybrid MOOCs are what I see as courses that lean towards the instructor deciding what this mixture is, and/or what the specific choices for the mixing are. Dual-layer/learning pathways courses are those that lean towards the learner deciding what this mixture is, and/or what the specific choices for mixing are. Either type can do so in more complex or less complex ways depending on the needs of the course.

Engaging Your Local Community Online: The Overlooked Hard Work of #EngageMOOC

“What does polarization currently look like in YOUR workplace, or campus, or community…online and off? What resources are you turning to in order to try to deal with it? Is there anything you are currently engaged with that you can share with us?” These questions from the last week of #EngageMOOC are a bit difficult for me to answer. When most people read these, they probably think of things like block walking, or soup kitchens, or community groups, or things that are in our physical communities around us.

I certainly find those things important. My whole family climbed into a car to travel through the pouring rain to a meeting of the local chapter of a political party in the new town we just moved to temporarily… only to find it canceled due to rain. What a bunch of snowflakes!

(it was actually pretty heavy and we should have known better ourselves)

Our attempts to get connected with people in our area have been a bit of a bust, as we keep finding out about activities the day after they happen, or they get rained out. However, even once we find those activities, they will still be events for a specific political party. Polarization in our area currently looks like everyone doing political stuff with those they agree with, and then not talking about political issues the rest of the time to avoid arguments.

Oh, sure – you ask any Republican if they know any Democrats and they will respond with “I have plenty of liberal friends!” or vice versa for Democrats. This will usually be followed by some statement that indicates they really don’t understand the other side.

A few weeks ago I saw our local HOA representative raving all over Facebook about “silly liberals.” I decided to message him about his activities, how public they were, and how they might make the few liberals in our community feel. Nothing accusatory, just asking him to consider their viewpoint. It was not a hostile conversation through DM, but he was pretty sure there was no harm in his words. Mostly just “liberals do it too” and “I have lots of liberal friends that are okay with it” and so on. I don’t really think I got anywhere with him.

He is now leading a grassroots “community task force” to take a look at security at our community schools – and he has been clear he wants to push for armed teachers like neighboring school districts already have.

You see, the “arming teachers” debate is not theoretical to us in Texas. We have had schools with armed teachers for years now (many of the “staff” that are armed there are teachers) – in the school district next to ours, in fact. People in my child’s school district are now asking “why can’t we have armed teachers like Argyle ISD?” And people in Argyle ISD are not content to just keep it there:

“I see a future where schools will be lumped into two categories. Gun free zones and ones that are not.”

“Argyle ISD and the Chief have done exactly what is needed to protect against the evils and evil people of this world!”

“Where Argyle is now, and where they started, and where they are headed is the future of safety in our world. They are not following, they and leading by example and showing everyone what must be done to protect our children at school.”

“Arming teachers is safety – they will not shoot without reason! Grow up people!!! Welcome to the millennial generation!!!”

To be honest, there really isn’t much I can do to change these people’s minds. But I have gotten through to some through debates on Facebook.

Yes, I said debates on Facebook.

Look, I know I am not going to change the world by debating on Facebook. I know that it is not for everyone. But so many people are so rarely exposed to ideas outside of their comfort zone that silently reading a debate on Facebook might be the only time they encounter opposing viewpoints. You see, I bring up different points not to win the argument, but to expose the larger number of people reading the posts to different viewpoints.

Of course, I am not talking about arguing with “that uncle” on my private Facebook wall. I go to local newspaper and community groups and pages to bring up different views for consideration – from pro-vaccination to stricter gun regulation to transgender bathroom access to Black Lives Matter. Yeah, it’s not exactly what anyone would call “fun.” Usually it goes nowhere. But then there is that random DM from someone telling me I have changed their mind on something. So I know it is getting through in some ways to some people, even though they might not let me know every time.

Look, if my strongly pro-Trump cousin can suddenly come out and post a rant on Facebook about how he is tired of Trump and will no longer vote Republican until they clean up their act… and he is quoting some ideas that I know I posted earlier… you know that I, or someone else he is following on Facebook, am getting through to him. We can’t just write these people off as extreme viewpoints that will never change. I get that it is hard work to get through to people, especially in online environments. It is not for everyone. But if it is something you feel you can do (and I wouldn’t recommend doing it constantly – I frequently get off social media for days at a time to recover from debates)… don’t feel bad for doing it. Don’t feel like your part is “less than” or “not as hard.” We need people to engage with different viewpoints, especially those where we are standing on issues of equality or safety that should be the baseline middle point (but have been labeled as “polarized” by others).

Getting Lost in the Four Moves of #EngageMOOC

This week we are looking at what to do about polarization and fake news in EngageMOOC. Our assignment this week was to look at Mike Caulfield’s Four Moves and use it to evaluate a web source. The Four Moves idea is a response to what Mike sees as the inadequacies of other information literacy checklists like CRAAP. Admittedly, these checklists do get long and cumbersome. For many people, this is not a problem. For others, it is. But in the end, my concern is that neither one will help with polarization.

So I am going through the Four Moves idea with common arguments that I often see getting polarized online. To be honest, I really like the Four Moves idea… under certain conditions. I have not read through the longer book that is linked in the post above, so maybe all of this is addressed in there. For now, I will just focus on the blog post. The first move of the Four Moves process (which is not a checklist… even though it technically is :) ) starts off with this:

Check for previous work. Most stories you see on the web have been either covered, verified, or debunked by more reputable sources. Find a reputable source that has done your work for you. If you can find that, maybe your work is done.

So this is great when dealing with a really simple new piece of news, like the example given of “Jennifer Lawrence died.” But the problem quickly becomes: what counts as a “reputable” source? Things like the CRAAP method are supposed to be about helping people determine what is reputable, so I am a bit confused as to how the Four Moves would replace CRAAP when it technically starts after CRAAP is finished (yeah, I am giggling at that too). In today’s polarized climate, people look to very bad websites like Breitbart, The Blaze, and dozens of other extreme left and right outlets as “reputable.” Millions see these websites as “a reputable source that has done your work for you”… even though they aren’t. Then there is the idea of being “debunked.” Of course, someone that is anti-vaccination could look at Mercola as “reputable”… but that has been debunked, right? Yes, it has. But then the anti-vaxxers debunked that debunkation (is that a word?). Then the pro-vaccination side debunked that debunkination… and it has been going back and forth like that for a long time. Years. Decades. There are so many competing debunkinations that it is impossible to keep up at times. The problem is, everything from the flat earth theory to the alt-right to the anti-vaccination movement to the anti-gun-control crowd has created an extensive network of websites that cite their own network of research, debunkinators, and reliable/credible sources. The problem is no longer “is this a reputable source” but “who gets to say which sites are reputable out of all the competing ecosystems of so-called reputable sources?”

Go upstream to the source. If you can’t find a rock-solid source that has done your verification and context-building for you, follow the story or claim you are looking at to its origin. Most stories shared with you on the web are re-coverage of some other reporting or research. Follow the links and get to the source. If you recognize the source as credible, your work may be done.

This flows from the same problem as the one above – going back to the source on most of the issues that polarize us will just end up at competing websites that all claim credibility and research. Even if you pull out Snopes or Politifact or Wikipedia, the response will often be “oh, those are leftist sites and I want something unbiased like Fox News.”

Read laterally. If you have traced the claim or story or research to the source and you don’t recognize it, you will need to check the credibility of the source by looking at available information on its reliability, expertise, and agenda.

Looking at available information on reliability, expertise, and agenda is technically part of CRAAP… but again, some people see all of this through different lenses. When I look at Mercola’s website, I see an obvious agenda from people without expertise and lacking in reliability. But the anti-vaxxers see a website that is full of reliability and expertise, with “no agenda but the truth.” The thing is, if you see a news article questioning the safety of the flu vaccine, you can go through each of these steps and end up on Mercola and deem the flu vaccine deadly.

Circle back. A reminder that even when we follow this process sometimes we find ourselves going down dead ends. If a certain route of inquiry is not panning out, try going back to the beginning with what you know now. Choose different search terms and try again.

Selecting different search terms on Google will often give you similar results, because Google looks past those terms and personalizes what it shows you based on your past searches. Of course, using CRAAP you wouldn’t make that mistake… but that doesn’t automatically make CRAAP better.

(hopefully you are giggling as much as I am every time I use CRAAP. Oh wait…)

So the thing is, I really like Four Moves in place of CRAAP and other methods… when dealing with someone who has the same version of “reliable” and “credible” that I do. And I am sure that someone with a very extreme conservative outlook on life would say the same thing… and would not trust me because of my views on which sites are “reliable” (that is actually not hypothetical – my name was included on a “list of worst pro-vaccination trolls” released years ago, because I have butted heads with so many anti-vaxxers online through the years). Polarization will continue as long as we can’t deal with the core issue: the different sides have a fundamentally different understanding of what counts as “credible, reliable sources.”

Losing a Friend in Times of Polarization: An #engageMOOC Side Thought

We have probably all either been defriended on Facebook over something, or watched others cut off contact with each other over disagreements. Losing friends that way is difficult in its own right, but I am referring to a different kind of loss here.

I met my friend Jeff in college, but we really connected later, spending a lot of time hanging out in the years right after college. My wife actually knew him before I did. We moved away and somewhat lost touch, but we would reconnect and catch up as much as we could. When we all got on social media, Jeff and I would connect more often and discuss life as well as our favorite topics: music and/or religion. Our views evolved away from the evangelical bubble we had been stuck in during college. Or, to be more honest, neither of us felt the need to keep pretending to fit a label that really didn’t fit in the first place.

Jeff was the more vocal of us about becoming a liberal. This cost him a lot of friends from our college days (though I also lost many of those friends as well). Jeff would get frustrated with the way he was treated and would shut down his social media accounts every so often. After a couple of months, he would pop back up with either a new account or a new name and start asking me about music. Sometimes he found me; other times I would go looking for him. This was his pattern for the last few years, until it changed at the end of 2017. He shut everything down in early November and didn’t come back. So in January of 2018, I decided to do some digging to see where he had popped back up.

All I found was his obituary from mid-November.

What really enraged me about this was that I found out so much later. I was still connected with some of his friends from his hometown, but none of them bothered to contact us and tell us he had passed. Additionally, no one from our college/post-college circles seemed to even know he had passed away. We had all become so polarized that we had failed at the basics of human decency: letting people know when their friends have died.

Jeff had lived a hard life. He was a black child adopted by white parents in a small rural town in east Texas. Our mutual friends from that town would have known he passed away, because they all knew Jeff. Jeff often talked about not knowing anything about his birth culture growing up and only discovering it at Baylor University (and even then, he recognized it was a bit skewed there). After getting out on his own, he struggled with discovering he had mental disabilities. He changed his faith to agnosticism and his political views to “true” liberalism (what most people call neo-liberalism today). He explored different sexualities. All of this caused him to be ostracized by his friends, his old church family, and most people in his hometown. My wife and I were among the few who stuck with him, because we don’t hold conservative views on any of those aspects of life.

But here was his obituary, ignoring all of that and speaking only of his activities at our old church. They used that period of his life to describe him, but didn’t bother to tell any of us who knew him then that he had passed away.

It was all about illusion. In a small town, they had to present the adopted son of a prominent bank manager as a “good Christian boy,” while making sure no one showed up to share any stories that might destroy that facade:

“I really haven’t talked to him since he went so radically liberal on Facebook.”

You see, his Facebook account was completely deleted after he passed away. He shut it down on November 6th. He died of a heart attack in his sleep on the 13th. His posts were deleted a few weeks later. I had thought he was the one who deleted them right after Thanksgiving (I noticed his funny comments had vanished one day from the “On This Day” section I am addicted to reading every day). Now I know it couldn’t have been him.

One of his Twitter accounts was also deleted. His other one? Still up. I don’t think they ever knew about it. If they did, it would probably be gone – if only to remove the profile picture he took of himself sticking his finger up his nose at conservatives. That was just Jeff’s sense of humor.

Of course, he was the one who was told he was polarizing others by speaking up for Black Lives Matter, progressive Christianity (and later agnosticism), and those facing systemic injustices because of mental disabilities. People cut him off for being “divisive.”

That is my biggest concern with the conversation of polarization today: what counts as the “norm” that people are “polarizing” away from? If people were being polarized over the size of the government, or socialism vs capitalism, or some other purely political issue… that is one thing. But when one person is fighting for equality for all, and the other is fighting against it because they think the status quo is just fine…. what can you do? Why is equality a pole to be polarized to, rather than the norm in the middle?

Sorry that I can’t fix that one, Jeff. Also sorry that I never convinced you to like King’s X. You won me over on Rush, though – so you won that debate in the end. I guess I had hoped that some day we could actually record our parody of “Stayin’ Alive” that mocks charismatic church culture. But maybe it is for the best that the world is forever spared from “Speaking in Tongues”: “Well, you can tell by the way I speak in tongues, I’m a Holy Ghost guy, no time for talk….”

Vygotsky vs Spivak: Sociocultural Theory and Subalterns in #EngageMOOC

To be honest, I am not sure if the world has become more polarized, or if we are just becoming more aware of how divided we already were. If you go back and look at ideas like sociocultural history, there is certainly groundwork for the idea that we are all different. But one thing is sure: we need to improve where we are, regardless of whether we just got here or have been here all along and just didn’t know it.

My interest in sociocultural theory came about in an Advanced Instructional Design course, where we had to take an educational theory and argue 1) why it was an instructional design theory, and 2) why it counted as an advanced one. There are different flavors of Vygotsky’s sociocultural theory out there, but the way I look at it, which will suffice for this post, is that we all belong to various sociocultural groupings that are constantly changing and affecting who we are and how we learn. These groupings can be anything from physical characteristics to employment status to educational study topic to even where we are currently eating a meal. The first set of videos in EngageMOOC touches on many different ways to look at some important sociocultural groupings, for example.

Because we are all slightly different socioculturally, and who we are socioculturally is in constant flux, making something like education into an unchanging constant becomes counter-intuitive to who we are as the human race. But those unchanging constants are what most theories look to codify.

Was I successful in defending sociocultural theory as an advanced instructional design theory? You can read the paper to judge for yourself (“Sociocultural Theory as an Advanced Instructional Design Method: Examining the Application, Possibilities, and Limitations”), but our instructor also admitted to us that there really is no such thing as an “advanced” instructional design theory. The Master’s degree program had an “instructional design” course, so the Ph.D. program was given an “advanced” ID course… just because.

Not too long after that, I became aware of the work of Gayatri Chakravorty Spivak, especially her best-known work, Can the Subaltern Speak? (a good one-paragraph summary here, or the full text here). The basic problem she was addressing was that many post-colonialists were trying to help the untouchables in India, but were speaking for them instead of having them speak for themselves. Additionally, there was the assumption that all of these groups had one collective opinion on any topic, thus erasing individual differences.

What does this have to do with our current polarization? We throw around solutions like “just listen to the other side” or “just respond in kindness”…. but those of us who have tried those methods find they rarely work. I have responded with kindness. I have responded with heated debate. I have responded with seeking to understand. Sometimes good dialogue is the result, but most of the time the arguing just continues.

However, I have taken note of what starts many fights online. There is usually a provocative statement about the other side thrown out by someone, typically containing vast misunderstandings and outright hyperbole about the “other.” This enrages those “others,” who jump in and start swinging. For example, you will rarely see a fight start over someone saying “I am pro-life because I want to see all babies born.” You more typically see rage ensue after a statement like “I am pro-life, which is so much better than you evil liberals that just delight in killing babies like your leader Killary does in her secret pizza basement ceremonies!”

Obviously, those of the liberal viewpoint take offense at this. But do we ask why they get offended? I mean beyond the obvious reason that these statements are not true and cast them in the most evil light. They know people already think that way about them – so why is it different when they see a Facebook comment from an acquaintance saying so?

I would submit that they feel their ability to speak for themselves has been violated by being cast into the wrong sociocultural grouping, based on assumptions from someone who didn’t even bother to ask what they think in their own voice.

They didn’t let the subaltern speak for themselves.

Spivak noted that subalterns can be anyone in a position of less power and control in a given situation, not just the untouchables of India. In education, our students are typically subalterns to the instructors. In online conflicts, those who propose some wild misunderstanding of the “other” tend to quickly jump into the seat of power, pushing those they unfairly characterize into subaltern roles through the language they use to tear them down.

So, of course, part of the task is getting everyone to realize that we all have unique sociocultural characteristics, and therefore we need to be allowed to speak for ourselves rather than have our beliefs dictated to us. But on the other side, when someone has attempted to erase our own voice in a situation, we should try to realize that it is okay to feel upset by that. It is okay to get “butthurt,” no matter what someone says. It is okay to push back. It is okay to ignore it. It is okay to respond in kindness, and it is okay to be angry. We are all unique people. We can all react uniquely. There is no roadmap.

But I would also suggest that we all need to learn from how we react, to make sure we don’t turn around and make others feel the same way.

Too many times, it seems like our solutions to “fake news” involve finding ways to get rid of anger. That will never happen. Other methods seem to point fingers every time people get things wrong online. That will never end, because the first time people stamped letters into clay tablets was also the first time people misunderstood something and wrote about it. People misunderstand – we always have, we always will.

None of this is easy. There will be no finish lines to cross to say “we fixed fake news!” or “we finally unpolarized everything!” It’s a process. You and I can only be our own unique part in it all.