So You Want to Go Online: OPMs vs In-House Development

As the Great OPM Controversy continues to rage, a lot is being said about developing online courses “in-house” (by hiring people to do the work rather than paying a company to do so). This is an area where I have a lot of experience at various levels, so I wanted to address the pros and cons of developing in-house capacity for offering online programs. I have been out of the direct instructional design business for a few years, so I will be a bit rusty here and there. Please feel free to comment if I miss anything or leave out something important. Still, I want to take a rough stab at a ballpark list of what needs consideration. First, I want to start with three given points:

  1. Everything I say here is assuming high-quality online courses, not just PowerPoints and Lecture Capture plopped online. But on the other hand, this is also assuming there won’t be any extra expenses like learning games or chat-bots or other expensive toys… errr… tools.
  2. In most OPM models, universities and colleges still have to supply the teachers, so that cost won’t be dealt with here, either. But make sure you are accounting for teacher pay (hopefully full time teachers more than adjuncts, and not just adding extra courses to faculty with already over-full loads).
  3. All of these issues I discuss are within the mindset of “scaling” the programs eventually to some degree or another, but I will get to the problems with scale later.

So the first thing to address is infrastructure, and I know there is a wide range of capacities here. Most universities and colleges have IT staff and support staff for things like email and campus computers. If you have that, you can hopefully build off of it. If you don’t… well, the OPM model might be the better route for you, as you are so far behind that you have to catch up with society, not just online learning. But I know most places are not in this boat. Some already have technology and support in place for online courses – so you can just skip this part and talk directly with those people about their ability to support another program.

You also have to think about the support of technology, usually the LMS and possibly other software. If you have this in place, check to make sure the existing tools have capacity to take on more (they usually have some). If you have an IT department, talk with them about what it would take to add an LMS and any other tools (like data analysis tools) you would like. If you are talking about one online program, you probably don’t need even one full-time position to support what you need initially. That means you can make this a win/win for IT by helping them get that extra position for the ____ they have been wanting for a while, if they can also share that position part-time with online learning technology support.

This is, of course, for a self-hosted LMS. All of the LMS providers out there will offer to host for you, and even provide support. It does cost, but shop around and realize there are vendors that will give you good service for a good price. There are also some that won’t deal with you at all if you are not bringing a large number of courses online initially, so be careful there.

Then there is support for students and teachers. Again, this is something you can bundle from most LMS providers, or contract individually from various companies. If you already have student and faculty tech support of some kind on campus, talk with them to see what it would take to support __ number of new students in __ number of new online courses. They will have to increase staff, but since they often train and employ student workers to answer the calls/emails, this is also a win/win for your campus to get more money to more students. Assuming your campus treats and pays its student workers fairly, of course. If not, make sure to fix that ASAP. But keep in mind that this can be done for the cost of hiring a few more workers to handle increased capacity and then paying to train everyone in support to take online learning calls.

Then there will be the cost of the technology itself. Typically, this is the LMS cost plus other tools and plug-ins you might want to add (data analytics, plagiarism detection, etc). Personally, I would say to avoid most of those bells and whistles at the beginning. Some of them – like plagiarism detection – are surveillance-minded and send the wrong message to learners. Hire some quality instructional designers (I’ll get to that in a minute) and you won’t even need these tools. Others, like data analytics, might be of use down the line, but you might also find some of the things they do underwhelming for the price. With the LMS itself, note that there are other options like Domain of One’s Own that can replace the LMS with a wider range of options for different teachers and students (and they work with single sign-on as well). There are also free open-source LMSs if you want to self-host. Then there are less expensive and more expensive LMS providers. Some will allow you a small contract for a small program with the option to scale; others want a huge commitment up front. Look around and remember: if it sounds like you are being asked to pay too much, you probably are.

So a lot of what I have discussed is going to vary in cost dramatically, depending on your needs and current capacity. However, if you remain focused on just what you need, and maybe sharing part of certain salaries with other departments to get part of those people’s time, and are also smart about scaling (more on that later), you are still looking at a cost that is in the tens of thousands range for what I have touched on so far. If you hit the $100k point, you are either a) over-paying for something, b) way behind the curve on some aspect, or c) deciding to go for some bells and whistles (which is fine if you need them or have people at your institution that want them – they usually cost extra with OPMs as well).
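To make that budget reasoning concrete, here is a minimal back-of-the-envelope tally. Every line item and dollar figure below is a hypothetical assumption for illustration – not a quote from any vendor – but the sanity check at the end reflects the $100K threshold discussed above:

```python
# Hypothetical first-year cost tally for a small in-house online launch.
# All line items and amounts are illustrative assumptions, not real quotes.

line_items = {
    "hosted LMS contract (small program)": 15_000,
    "shared IT support position (half salary)": 25_000,
    "extra student-worker help desk hours": 10_000,
    "support staff training": 5_000,
}

total = sum(line_items.values())
for item, cost in line_items.items():
    print(f"{item:45s} ${cost:>8,}")
print(f"{'TOTAL':45s} ${total:>8,}")

# Past this point, re-check whether you are over-paying,
# behind the curve, or buying bells and whistles.
assert total < 100_000
```

The exact numbers will vary wildly by institution; the point of the exercise is simply to itemize before you sign anything.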

The next cost that almost anyone who wants to go online will need to pay, no matter what you do, is course development. Many people think they can just get the instructors to do this – but remember that the course will only be as good as their ability/experience in delivering online courses. You may find a few instructors that are great at it, but most at your school probably won’t fall into that category. I don’t say that as a bad thing in this context per se – most instructors don’t get trained in online course design, and even if they do, it is often specific to their field and not the general instructional design field. You will need people to make the course, which is where OPMs usually come in – but in-house instructional designers can do this as well.

With an average of 6-8 months lead time with a productive instructor, a quality instructional designer can complete two to three quality 15-week online courses per semester. I know this for a fact, because as an instructional designer I typically completed 9 or so courses per year. And some IDs would consider that “slow.” More intense courses that are less ready to transition to online could take longer. But you can also break out of the 15-week course mindset when going online – just food for thought. If you are starting up a 10-course online program, you would probably want three instructional designers, with varying specialties. Why three IDs if just one could handle all ten courses in two years easily? Because there is a lot more to consider.
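The staffing arithmetic above can be sketched out explicitly. This is a rough estimator using the illustrative throughput figures from the text (roughly 4-6 courses per ID per year); the function name and defaults are my own, not an industry formula:

```python
import math

# Back-of-the-envelope instructional designer staffing estimate.
# Throughput numbers are the illustrative figures from the text above.

def ids_needed(total_courses: int, courses_per_id_per_year: int,
               target_years: float) -> int:
    """Minimum number of IDs to build `total_courses` within
    `target_years`, given per-ID annual throughput."""
    capacity_per_id = courses_per_id_per_year * target_years
    return math.ceil(total_courses / capacity_per_id)

# One productive ID at ~5 courses/year (2-3 per semester):
print(ids_needed(10, 5, 2))  # a 10-course program fits one ID over 2 years
print(ids_needed(10, 5, 1))  # compressing to 1 year already takes 2 IDs
```

Of course, as the text argues, raw course production is not the whole job – revision, instructor training, and quality work all eat into that capacity, which is one argument for hiring three rather than one.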

Once you start one online program, other programs will most likely follow suit fairly quickly. It almost always happens that way. So go ahead and get a couple more programs in the pipeline to get going once the IDs are ready. But you also need to build up and maintain infrastructure once you get those classes going. How do you fix design problems in the course? When do you revise general non-emergency issues? What about when you change instructors? And who trains all of these instructors on their specific course design? What about random one-off courses that want to go online outside of a program? Who handles course quality and accreditation? And so on. Quality, experienced instructional designers can handle all of these and more, even while designing courses. Especially if you get one that is a learning engineer or that at least specializes in learning engineering, because these infrastructural questions are part of their specialty.

The salary and benefits range of an instructional designer is between $50K and $100K a year, depending on experience and the cost of living where you are located. These are also positions that can work remotely if you are open to that – but you will want at least one on campus so they can talk to your students for feedback on the courses they are designing. Remote work is something to keep in mind because you also have to consider the cost of finding an office and getting computers and equipment for each new person you want to hire (either as IDs or the other positions described). Also don’t forget about the cost of benefits like health care, which is pretty standard for full-time IDs.

Another aspect to keep in mind is accreditation – that will take time and people, but that will be the case even if you go with an OPM. You will need to pull in people from across the campus that have experience with this, of course – but you will also have to find people that can handle this aspect regardless of what model you choose. And it can be a doozy, just FYI.

Another aspect to consider is advertising. This is a factor that will always cost, unless you are focused solely on transitioning an existing on campus program into an online one (and not planning on adding the online option to the on-campus one). But even then, if you want it to scale – you will need to advertise. Universities aren’t always the best at this. If yours is, then skip ahead. If not, you will need to find someone that can advertise your new program. Typically, this is where OPMs tend to shine. But it is also getting harder and harder to find those that will just let you pay for advertising separate from the entire OPM package.

I can’t really say what you need to spend here – but I will say to be realistic. Cap your initial courses at manageable amounts – not just for your instructors, but also for your support staff. I can’t emphasize enough that it is better to start off small and then scale up rather than open the floodgates from the beginning. Every course that I have seen that opens up the first offerings to massive numbers of students from the beginning has also experienced massive learner trauma. Don’t let companies or colleges gloss over those as “bumps in the road.” Those were actual people that were actually hurt by being that bump that got rolled over. Hurt that could have been avoided if you started small and scaled up at a manageable pace.

So while we are here, let’s talk scale. Scale is messy, no matter how you do it. Even going from one on-campus course to two on-campus courses has traditionally led to problems. All colleges have wanted to increase enrollments as much as possible since the beginning of academia, so it’s not like OPMs were the first to talk about or try scale. However, we need to be real with ourselves about scale and the issues it can cause.

First of all, not all programs can scale. Nursing programs scale immensely because the demand for nurses is still massive. Also, nurses work their tails off, so Nursing instructors often personally take care of many problems of scale that some business models cause. I’m still not sure if the OPMs involved in those programs have even realized that is true yet. But not all programs can scale like a Nursing program can. Not all fields have the demand Nursing does. Not all fields have people with the mindset Nurses have (no offense hopefully, but many of you know it’s true and it’s okay – I’m not sure if Nurses ever sleep).

All that to say – if you are not in Nursing, don’t expect to scale like Nursing can. It’s okay. Just be realistic about it. Also, be honest about any problems that are happening. Glossing over problems will only cause more problems in no time. Always have your foot on the brake, ready to stop the scaling before issues spiral out of hand.

Remember: education is a human endeavor, and people don’t react well to being herded like cattle. I feel like I have only touched the surface and left out so much, but I am as tired of typing as you probably are of reading. Hopefully this is giving some food for thought for the people that have been wondering about in-house program development.

So why go with in-house development rather than an OPM? Well, I have been making the case for the cost-saving benefits plus the capacity-building benefits. Recently I read about an OPM that wanted to charge $600,000 to build one 10-course program. All that I have outlined here, plus the stuff I left out, would easily come in at half of that for a high-quality program. And I am one of those people that usually advocates for how expensive online courses can be to do right. But even I am thinking “Whoa!” at $600K.

Look, if you are wanting to build a program in a field like Nursing that can realistically scale, and you want to deal with thousands of students being pushed through a program (along with all the massive problems that will bring), then you are probably one of five schools in the nation that fit that description and OPMs are probably the best bet for you. For the other 3000-4000+ institutions in the nation, here are some other factors to consider:

  • Hiring people usually means some or all of those people will live in your community, thus supporting local economies better.
  • Local people means people on your campus that can interact with your students and get their input and perspective.
  • Having your people do things also typically means more opportunities to hire students as GTAs, GRAs, assistants, etc – giving them real world skills and money for college.
  • When your academics and your GRAs are part of something, they usually research it and publish on it. The impact on the global knowledge arena could be massive, especially if you publish with OER models.
  • Despite what some say, HigherEd is constantly evolving. Not as fast as some would like, but it is happening. When the next shift happens, you will have the people on staff already to pivot to that change. If not, that will be another expensive contract to sign with the next OPM.

The last point I can’t emphasize enough. When the MOOC craze took off, my current campus turned to some of its experienced IDs – myself and my colleague Justin – to put a large number of MOOCs online. Now that analytics and AI are becoming more of a thing in education (again), they are turning to us and other IDs and people with Ed-Tech backgrounds on campus as well. For people that went the OPM route, these would all be more (usually expensive) contracts to buy. For our campus, it means turning to the people they are already paying. I don’t know what else to say to you if that doesn’t speak for itself.

Also, keep in mind that those who are not in academia don’t always understand the unique things that happen there. Recently I saw a group of people on Twitter upset about a college senior who couldn’t graduate because the one course they needed wasn’t offered that semester. The responses to this scenario are ones many in academia are used to hearing: “bet there is a simple software fix for this!” “what a money-making scam!” “if only they cared to treat the student like a customer, they wouldn’t make this happen!” The implication is that the problem was on the University’s side for not caring about course scheduling enough to make graduation possible.

Most people in academia are rolling their eyes at this – it is literally impossible for schools to get programs accredited if they don’t prove that they have created a pathway for learners to graduate on time. It makes good business sense that not all courses can be offered every semester, just like many businesses do not sell all products year round (especially restaurants). Plus, most teachers will tell you it is better to have 10 students in a course once a year than 2-3 students every semester – more interaction, more energy, etc. But schools literally have to map out a pathway for these variable offerings to work in order to just get the okay for the courses in the first place.

Those of us in academia know this, but it seems, based on what I saw on Twitter recently, that many in the OPM space do not. We also know that there is always that handful of students who ignore the course offering schedules posted online, the advice of their advisers, and the warnings of their instructors because they think they can get the world to bend to their desires. I remember in the 90s telling two classmates they wouldn’t graduate on time if they weren’t in a certain class with me. They scoffed, but it turns out they in fact did not graduate on time.
So something to keep in mind – outside perspectives and criticism can be helpful, but they can also completely misunderstand where the problems actually lie.

And look, I get it – there will always be institutions that prefer to get a “program in a box” for one fee, no matter how large it is. If that is you, then more power to you. There are a few things I would ask if you go the OPM route: first of all, please find a way to be honest and open about the pros and cons of working with your OPM. They may not like it, but a lot of the backlash that OPMs are currently facing comes from people just not buying the “everything is awesome” line so many are pushing. The education world needs to know your successes as well as your failures. Failure is not a bad thing if you grow from it. Second, please keep in mind that while the “in-house” option looks expensive and complicated, going the OPM route will also be expensive and complicated. They can’t choose your options for you, so all the meetings I discuss here will also happen within an OPM model, just with different people at the table. So don’t get an inflated ego thinking you are saving time or money going that route. Building a company is much different from building a degree program, so don’t buy into the logic that they are saving you start-up funds. They had to pay for a lot of things as a for-profit company that HigherEd institutions never have to pay for.

Finally, though, I will point out that you can also still sign contracts with various vendors for various parts of your process while developing in-house, like many institutions have for decades. This is not always an all-or-nothing, either/or situation (see the response from Matthew Rascoff here for a good perspective on that, as well as Jonathan D. Becker’s response at the same link as a good case for in-house development). There are many companies in the OPM space that offer quality a la carte type services for a good price, like iDesign and Instructional Connections. Like I have said on Twitter, I would call those OPS (Online Program Support) more than OPM. It’s just that this term probably won’t catch on. I have also heard the term OPE, for Online Program Enablers, which probably works better.

Are MOOCs Fatally Flawed Concepts That Need Saving by Bots?

MOOCs are a problematic concept, as are many things in education. Using bots to replicate various functions in MOOCs is also a problematic concept. Both MOOCs and bots seem to go in the opposite direction of what we know works in education (smaller class sizes and more interaction with humans). So, teaching with either or both concepts will open the doors to many different sets of problems.

However… there are also even bigger problems that our society is imposing on education (at least in some parts of the world): defunding of schools, lack of resources, and eroding public trust being just a few. I don’t like any of those, and I will continue to speak out against them. But I also can’t change them overnight.

So what do we do with the problems of fewer resources, fewer teachers, more students, and more information to teach as the world gets more complex? Some people like to just focus on fixing the systemic issues causing these problems. And we need those people. But even once they do start making headway… it will still be years before education improves from where it is. And how long until we even start making headway?

The current state of research into MOOCs and/or bots is really about dealing with the reality of where education is right now. Despite there being some larger, well-funded research projects into both, the reality is that most research is very low (or no) budget attempts to learn something about how to create some “thing” that can help a shrinking pool of teachers educate a growing mass of students. Imperfect solutions for an imperfect society. I don’t fully like it, but I can’t ignore it.

Unfortunately, many people are causing an unnecessary either/or conflict between “dealing with scale as it is now” and “fixing the system that caused the scale in the first place.” We can work at both – help education scale now, while pushing for policy and culture change to better support and fund education as a whole.

On top of all of that, MOOCs tend to be “massively” misunderstood (sorry, couldn’t resist that one). Despite what the hype claims, they weren’t created as a means to scale or democratize education. The first MOOCs were really about connectivism, autonomy, learner choices, and self-directed learning. The fact that they had thousands of learners in them was just a thing that happened due to the openness, not an intended feature.

Then the “second wave” of MOOCs came along, and that all changed. A lot of this was due to some unfortunate hype around MOOCs published in national publications that proclaimed some kind of “educational utopia” of the future, where MOOCs would “democratize” education and bring quality online learning to all people.

Most MOOC researchers just scoffed at that idea – and they still do. However, they also couldn’t ignore the fact that MOOCs do bring about scaled education in various ways, even if that was not the intention. So that is where we are at now: if you are going to research MOOCs, you have to realize that the context of that research will be about scale and autonomy in some way.

But it seems that the misunderstandings of MOOCs are hard-coded into the discourse now. Take the recent article “Not Even Teacher-Bots Will Save Massive Open Online Courses” by Derek Newton. Of course, open education and large courses existed long before the term “MOOC” was coined… so it is unclear what needs “saving” here, or what it needs to be saved from. But the article is a critique of a study out of the University of Edinburgh (I believe this is the study, even though Newton never links to it for you to read it for yourself) that sought “to increase engagement” by designing and deploying “a teacher-bot (botteacher) in at least one MOOC.” Newton then turns around and says “the idea that a pre-programmed bot could take over some teaching duties is troubling in a Blade Runner kind of way.” Right there you have your first problematic switcheroo. “Increasing engagement” is not the same as “taking over some teaching duties.” That is like saying that lane departure warnings on cars are the same as taking over some driving duties. You can’t conflate something that assists with something that takes over. Your car will crash if you think “lane departure warnings” are “self-driving cars.”

But the crux of Newton’s article is that because the “bot-assisted platform pulls in just 423 of 10,145, it’s fair to say there may be an engagement problem…. Botty probably deserves some credit for teaching us, once again, that MOOCs are fatally flawed and that questions about them are no longer serious or open.”  Of course, there are fatal flaws in all of our current systems – political, religious, educational, etc. – yet questions about all of those can still be serious or open. So you kind of have to toss out that last part as opinion and not logic.

The bigger issue is that calling 423 people an “engagement problem” is an unfortunate way to look at education. That is still a lot of people, considering most courses at any level can’t engage 30 students. But this misunderstanding comes from the fact that many people still misunderstand what MOOC enrollment means. 10,000 people signing up for a MOOC is not the same as 10,000 people signing up for a typical college course. Colleges advertise to millions of prospective students, who then have to go through a huge process of applications and trials to even get to register for a course. ALL of that is bypassed for a MOOC. You see a course and click to register. Done. If colleges did the same, they would also get 10,000+ signing up for a course. But they would probably only get 50-100 showing up for the first class – a lot less than any first week in most MOOCs.
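The funnel argument can be made concrete with some quick arithmetic. The 423-of-10,145 figure comes from the article under discussion; the campus-side funnel numbers below are hypothetical, chosen only to illustrate how the denominators differ:

```python
# Comparing "engagement rates" once the enrollment funnel is accounted for.
# The 423/10,145 figure is from the article discussed above; the campus
# funnel numbers are hypothetical illustrations.

mooc_signups = 10_145
mooc_engaged = 423
mooc_rate = mooc_engaged / mooc_signups          # share of everyone who clicked "enroll"

# A traditional course filters first: suppose 100,000 prospects see the ads,
# and after applications, admissions, and relocation, 30 sit in the classroom.
campus_prospects = 100_000
campus_in_seat = 30
campus_rate = campus_in_seat / campus_prospects  # share of the initial pool

print(f"MOOC:   {mooc_rate:.1%} of sign-ups engaged")
print(f"Campus: {campus_rate:.2%} of initial prospects ever reached a seat")
```

Measured against the start of each funnel rather than the end, the MOOC figure stops looking like a uniquely fatal flaw.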

Make no mistake: college courses would have just as bad of engagement rates if they removed the filters of application and enrollment to who could sign up. Additionally, the requirement of “physical re-location” for most would make those engagement rates even worse than MOOCs if the entire process were considered.

Look at it this way: 30 years ago, if someone said “I want to learn History beyond what a book at the library can tell me,” they would have to go through a long and expensive process of applying to various universities, finally (possibly) getting accepted at one, and then moving to where that University was physically located. Then, they would have to pay hundreds or thousands of dollars for that first course. How many tens of thousands of possible students get filtered out of the process because of all of that? With MOOCs, all of that is bypassed. Find a course on History, click to enroll, and you are done.

When we talk about “engagement” in courses, it is typically situated in a traditional context that filters out tens of thousands of people before the course even starts. To then transfer the same terminology to MOOCs is to utilize an inaccurate critique based on concepts rooted in a completely different filtering mechanism.

Unfortunately, these fundamentally flawed misunderstandings of MOOC research are not just one-off rarities. This same author also took a problematic look at a study I helped out with, alongside Aras Bozkurt and Whitney Kilgore. Just look at the title of Newton’s previous article: Are Teachers About To Be Replaced By Bots? Yeah, we didn’t even go that far, and intentionally made sure to stay as far away from saying that as possible.

Some of the critique of our work by Newton is just very weird, like where he says: “Setting aside existential questions such as whether lines of code can search, find, utter, reply or engage discussions.” Well, yes – they can do that. It’s not really an existential question at all. It’s a question of “come sit at a computer with me and I will show you that a bot is doing all of that.” Google has had bots doing this for a long, long time. We have pretty much proven that Russian bots are doing this all over the world.

Then Newton gets into pull quotes, where I think he misunderstands what we meant by the word “fulfill.” For example, it seems Newton misunderstood this quote from our article: “it appears that Botty mainly fulfils the facilitating discourse category of teaching presence.” If you read our quote in context, it is part of the Findings and Discussion section, where we are discussing what the bot actually did. But it is clear from the discussion that we don’t mean that Botty “fully fills” the discourse category, but that what it does “fully qualifies” as being in that category. Our point was made in light of “self-directed and self-regulated learners in connectivist learning environments” – a context where learners probably would not engage with the instructor in the first place. In this context, yes, it did seem that Botty was filling in for an important instructor role in a way that satisfies the parameters of that category. Not perfectly, and not in a way that replaces the teacher. It was a context where the teacher wasn’t able to be present due to the realities of where education currently is in society – scaled and less supported.

Newton goes on to say: “What that really means is that these researchers believe that a bot can replace at least one of the three essential functions of teaching in a way that’s better than having a human teacher.”

Sorry, we didn’t say “replace” in an overall context, only “replace” in a specific context that is outside of the teacher’s reach. We also never said “better than having a human teacher.” That part is just a shameful attempt at putting words in our mouths that we never said. In fact, you can search the entire article and find we never said the word “better” about anything.

Then Newton goes on to mis-use another quote of ours (“new technological advances would not replace teachers just because teachers are problematic or lacking in ability, but would be used to augment and assist teachers”). His response to this is to say that we think “new technology would not replace teachers just because they are bad but, presumably, for other reasons entirely.”

Sorry, Newton, but did you not read the sentence directly after the one you quoted? We said “The ultimate goal would not be to replace teachers with technology, but to create ways for non-human teachers to work in conjunction with human teachers in ways that remove all ontological hierarchies.” Not replacing teachers… working in conjunction. Huge difference.

Newton continues with injecting inaccurate ideas into the discussion, such as “Bots are also almost certain to be less expensive than actual teachers too.” Well, actually, they currently aren’t always less expensive in the long run. Then he tries to connect another quote from us about how lines between bots and teachers might get blurred as proof that we… think they will cost less? That part just doesn’t make sense.

Newton also did not take time to understand what we meant by “post-humanist,” as evidenced by this statement of his: “the analysis of Botty was done, by design, through a “post-humanist” lens through which human and computer are viewed as equal, simply an engagement from one thing to another without value assessment.” Contrast his statement with our actual statement on post-humanism: “Bayne posits that educators can essentially explore how to retain the value of teacher presence in ways that are not in opposition to some forms of automation.” Right there we clearly state that humans still maintain value in our study context.

Then Newton pulls his most shameful bait and switch of the whole article at the end: pulling one of our “problematic questions” (where we intentionally highlighted problematic questions for sake of critique) and attributing it as our conclusion: “the role of the human becomes more and more diminished.” Newton then goes on to state: “By human, they mean teacher. And by diminished, they mean irrelevant.”

Sorry Newton, that is simply not true. Look at our question following soon after that one, where we start the question with “or” to negate what our list of problematic questions ask: “Or should AI developers maintain a strong post-humanist angle and create bot-teachers that enhance education while not becoming indistinguishable from humans?” Then, maybe read our conclusion after all of that and the points it makes, like “bot-teachers can possibly be viewed as a learning assistant on the side.”

The whole point of our article was to say: “Don’t replace human teachers with bot teachers. Research how people mistake bots for real people and fix that problem with the bots. Use bots to help in places where teachers can’t reach. But above all, keep the humans at the center of education.”

Anyways, after a long side-tangent about our article, back to the point of misunderstanding MOOCs, and how researchers of MOOCs view MOOCs. You can’t evaluate research about a topic – whether MOOCs or bots or post-humanism or any topic – through a lens that fundamentally misunderstands what the researchers were examining in the first place. All of these topics have flaws and concerns, and we need to critically think about them. But we have to do so through the correct lens and contextual understanding, or else we will cause more problems than we solve in the long run.

After the Cambridge Analytica Debacle, Is It Time to Ban Psychographics?

What are psychographics you may ask? Well, you may not, but if so: the simple definition is that they are a companion to demographics, but they try to figure out what those demographics tell us about the person behind the demographic factors. This article looks at some of the features that could go into psychographics, like figuring out if a person is “concerned with health and appearance” or “wants a healthy lifestyle, but doesn’t have much time” or whatever the factor may be. This article was written in 2013, long before the Cambridge Analytica debacle with Facebook. That entire debacle should have people asking harder questions of Ed-Tech, such as:

Audrey Watters will surely be writing about her question and more soon (it’s a huge topic to write about already), and Autumm Caines has already weighed in on her experiences investigating Cambridge Analytica long before most of us were aware of them. Like many people, I had to dig up some refreshers about what psychographics are after Audrey Watters’ tweet to make sure I was thinking of the right concept. And now I want to question the whole concept of psychographics altogether. Maybe “ban” is too strong of a word, maybe not. You can be the judge.

Even in the fairly “innocent” tone of the 2013 article I linked above, there are still concerning aspects of psychographics shining through: interview your customers with the agenda of profiling them, and maybe consider telling them what you are doing if they are cool enough; you can’t trust what they say all the time, but you can trust their actions; and “people’s true motivations are revealed by the actions they take.”

But really, are they? Don’t we all do things that we know we shouldn’t sometimes, just like we sometimes say things we know we don’t believe? Isn’t the whole idea of self-regulation based on us being able to overcome our true motivations and do things we know we need, even if we aren’t truly motivated?

The whole basis of psychographics in this article is that you can trust the actions more than the words. I’m not so sure that is true, or really even a “problem” per se. We are all human. We are inconsistent. We change our minds. We don’t do what we say we should, or do things that we say we shouldn’t at times. It is part of being alive – that makes life interesting and frustrating. It’s not a bug in the system to be fixed by trickery.

(Side note: anyone that really digs into psychographics will tell you that it is more complex than it was in 2013, but I don’t really have a stomach to go any more complex than that.)

So is it really fair and accurate to do this kind of profiling on people? At certain levels, I get that companies need to understand their customers. But they already have focus groups and test cases and even comment cards to gather this data from. If they don’t think they are getting accurate enough information from those sources, why would they think they could get even more accurate information from trickier methods? Either way, all words and actions come from the same human brain.

Look at the example of what to do with psychographics in marketing in the 2013 article. That whole section is all about tricking a person to buy a product, via some pretty emotionally manipulative methods. I mean, the article flat out tells readers to use a customer’s kids to sell things to them: “Did she love the one about the smiley-face veggie platters for an after-school snack? Give her more ways to help keep her kids eating well.”

Really?

What about just giving her the options of what you sell and what they are for, and let her decide what she needs?

And what if she starts showing some signs of self-destructive behavior? If the psychographics are run by AI… will it recognize that and back off? Or will it only see those behaviors as signs of what to sell, and start pushing items to that person that they don’t need? Do you think this hasn’t already happened?

Maybe I am way off base comparing psychographics to profiling and emotional manipulation. I don’t think I am. But if there is even a chance that I am not off base, what should our reaction be as a society? I don’t want to go overboard and even go so far as to get rid of customer surveys and feedback forms. If a company gives me a form designed in a way that lets me tell them what kind of ads I want to see, I wouldn’t mind that. Well, in the past I wouldn’t have minded. After Cambridge Analytica, I would want believable assurances that they would stick with what I put in the form and not try to extrapolate between the lines. I would want assurance they aren’t doing… well… anything that Cambridge Analytica did. [Insert long list of ethical violations here]

But would most companies self-regulate and stay within ethical limits with all that data dangling in front of them? Ummmmm…. not so sure. We may need to consider legislating ethical limits on these activities, as well as outright banning others that prove too tempting. And then figure out how to keep the government in line on these issues as well. Just because Cambridge Analytica and Facebook are in the hot-seat this week, that doesn’t mean some government department or agency won’t be in that same seat tomorrow.

We are the Monster at the End of the Book

I wanted to circle back to a thought I had while reading Maha Bali’s excellent post Reproducing Marginality? The whole post is excellent, but one line made me think more than others. In it, she quotes something that she wrote with Paul Prinsloo and Kate Bowles that says:

…for most of us not in the US (or the UK), this [edtech] vision has often signalled top-down, US-to-world, Anglo-oriented, decontextualized, culturally irrelevant, infrastructure-insensitive, and timezone-ignorant aspirations, even when the invitation for us to join in may be well-intentioned.

Many of us in the Western world of EdTech are trying to figure out how to fix Education and Ed Tech, looking for the evil monsters out there that are causing the problems, and then fixing those monsters with research, technology, design, or methods.

And sometimes we are afraid to see what those monsters are that are damaging education, because they may be too big for us to fix.

This all reminds me of one of my favorite books as a kid: The Monster at the End of the Book.


In this book, Grover notices the title of the book and spends every page trying to stop you, the reader, from reaching the end of the book. He nails pages together, builds brick walls, and pleads with you NOT to get to the end of the book and face the monster lurking there.


Grover is terrified of the monster at the end of the book. But when he gets to the end of the book, he finds that he was the monster all along and that he had nothing to fear.

We (in the western world) are pretty much the monster at the end of the book when it comes to education reform. We are doing everything we can to avoid that possibility – looking to everything but ourselves to fix the problems. But it is our (sometimes) extreme ethno-centrism, socio-cultural centrism, whatever you want to call it, that is the problem all along. I would even go so far as to say that as long as we are the center of the education world, we are always going to be the problem.

edugeek-journal-avatarEducation is about learning. Learners do the learning. Learning needs to be the center of what we do. Learners can live anywhere in the world, in any context. We need to examine the structures that keep the wrong things at the center of education. We need to skip to the end of the book, realize we are the monster at the end of the book, and turn the story around. Learner agency is the only true “innovation” we have left to explore deeply in the education world.

Big (Scary) Education (Retention) Data (Surveillance)

Big data in education might be the savior of our failing learning system or the cement shoes that drag the system to the bottom of the ocean, depending on who you talk to. No matter what your view is, big data is here and we need to pay attention to it.

My view? It is a mixture of extreme concern for the glaring problems mixed with hope that we can correct course on those problems and do something useful for the learners with the data.

Yesterday at LINK Lab we had a peek behind the scenes at a data collection tool that UTA is implementing. The people that run the software at UTA are good people with good intentions. I also hope they are aware of the problems already hard coded into the tool (and I suspect they are).

Big Data can definitely look scary for a lot of reasons. What we observed was mostly focused on retention (or “persistence,” I believe, was the friendlier term the software uses). All of the data collected basically turns students into a collection of numbers on hundreds of continuums, and then averages those numbers out to rank them on how likely they are to drop out. To some, this is a scary prospect.
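To make the mechanics concrete, here is a minimal sketch of the kind of scoring described above – reducing a student to normalized factors and averaging them into one rank. The factor names and values are hypothetical illustrations, not anything from the actual vendor’s tool:

```python
# Hypothetical sketch of a "persistence" risk score: each student is reduced
# to normalized factor values (0.0 = low risk, 1.0 = high risk) that are
# averaged into a single number. Factor names are made up for illustration.

def risk_score(factors: dict[str, float]) -> float:
    """Average a student's normalized risk factors into one rank."""
    if not factors:
        raise ValueError("no factors supplied")
    return sum(factors.values()) / len(factors)

student = {
    "low_lms_logins": 0.8,
    "below_class_average": 0.0,
    "financial_hold": 1.0,
}

print(round(risk_score(student), 2))  # 0.6
```

The average itself is trivial; the bias lives in which factors get chosen and how each one is normalized – decisions that are invisible once the single number comes out the other end.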

Another scary prospect is that there is a real danger of using that data to see which students to ignore (because they are going to stick around anyways) and which students to focus time and energy on (in order to make the university more money). This would be data as surveillance more than as an educational tool.

Looking at the factors that learners are ranked by in this data tool led to no surprises – we have known from research for a long time what students that “persist” do and what those that don’t “persist” do (or don’t do). The lists of “at risk” students that these factors produce will probably not be much different from the older “at risk” lists that have been around for decades. The main change will be that we will offload the process of producing those lists to the machines, and wash our hands of any bias that has always existed in producing those lists in the first place.

And I don’t want to skip over the irony of spending millions of dollars on big data to find out that “financial difficulties” are the reason that a large number of learners don’t “persist.”

The biggest concern that I see is the amount of bias being programmed into the algorithms. Even the word “persistence” implies certain sociocultural values that are not the same for all learners. Even in our short time looking around in the data collection program, I saw dozens of examples of positivist white male bias hard coded in the design.

For example, when ranking learners based on grades, one measure ranked learners in relation to the class average. Those that fell too far below the class average were seen as having one risk factor for not “persisting.” This is different than looking at just grades as a whole. If the class average is a low B but a learner has a high B, they would be above the class average and in the “okay” zone for “persistence.”
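A quick sketch of that relative-grade factor shows the problem: the flag fires only when a learner falls some margin below the class average, no matter what the absolute grade means to the learner. The function name and the threshold value here are assumptions for illustration, not the tool’s actual logic:

```python
# Sketch of the relative-grade risk factor: a learner is flagged only if
# they fall more than `margin` points below the class average, regardless
# of their absolute grade. The margin value is hypothetical.

def flagged_relative(grade: float, class_avg: float, margin: float = 10.0) -> bool:
    """Flag a learner who is more than `margin` points below the class average."""
    return grade < class_avg - margin

# A high-B student (88) in a class averaging a low B (80) is "okay"...
print(flagged_relative(88, 80))   # False
# ...and so is a low-A student (91) in a class averaging 70, even if that
# grade signals real trouble in the learner's own cultural context.
print(flagged_relative(91, 70))   # False
```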

But that is not how all cultures view grades. My wife is half Indian and half Australian. We have been to India and talked to many people that were under intense stress to get the highest grades possible. It is a huge pressure for many in certain parts of that culture. But even a low A might not register as a troubling signal if the class average is much lower. But to someone that is facing intense pressure to get the best grades or else come home and work in Dad’s business… they need help.

(I am not a fan of grades myself, but this is one area that stuck out to me while poking around in the back end of the data program)

This is an important issue since UTA is designated as a Hispanic-Serving Institution. We have to be careful not to get into the same traps that education has fallen into for centuries related to inequalities. But as our LINK director Lisa Berry pointed out, this is also why UTA needs to dive into Big Data. If we don’t get in there with our diverse population and start breaking the algorithms to expose where they are biased, who else will? Hopefully there are others, but the point is that we need to get in there and critically ask the hard questions, or else we run the risk of perpetuating educational inequalities (by offloading them to the machines).

For now, a good place to start is by asking the hard questions about privacy and ownership in our big data plan:

Are the students made aware that this kind of data is being collected?

If not, they need to be made aware. Everywhere that data is collected, there should be a notification.

Beyond that, are they given details on what specific data points are being collected?

If not, they need to know that as well. I would suggest a centralized ADA-compliant web page that explains every data point collected in easy-to-understand detail (with as many translations to other languages as possible).

Can students opt-out of data collection? What about granular control over the data that they do allow to be collected?

Students should be able to opt out of data collection. Each class or point of collection should have permissions. Beyond that, I would say they should be able to say yes or no to specific data points if they want to. Or even beyond that, what about making data collection opt-in?
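One way to picture that kind of granular, opt-in control is a per-student consent record where nothing is collected unless the student has explicitly said “yes” to that specific data point. This is only a sketch of the idea – the class shape and data point names are my own assumptions, not any real system’s design:

```python
# Sketch of granular, opt-in consent: collection of a data point is only
# permitted if the student has explicitly allowed it. The default is an
# empty set, i.e. nothing is collected without an affirmative opt-in.

from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    student_id: str
    allowed: set[str] = field(default_factory=set)  # opt-in: empty by default

    def permits(self, data_point: str) -> bool:
        return data_point in self.allowed

record = ConsentRecord("s123")
record.allowed.add("lms_login_times")  # student opts in to one data point

print(record.permits("lms_login_times"))   # True
print(record.permits("discussion_posts"))  # False
```

The key design choice is the empty default: an opt-out system would start with everything in `allowed`, which is exactly the posture being questioned here.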

Who owns the students’ data (since it is technically their actions that create the data)?

This may seem radical to some, but shouldn’t the student own their own data? If you say “no,” then they should at least have the right to access it and see what is being collected on them specifically.

Think of it this way: How will the very substantial Muslim population at UTA feel about a public school, tied to the government, collecting all of this data on them? How will our students of color feel about UTA collecting data on them while they are voicing support for Black Lives Matter? How would the child of illegal immigrants feel about each class at UTA collecting data about them that could incriminate their parents?

edugeek-journal-avatarThese issues are some of the hard things we have to wrestle with in the world of Big Data in Education. If we point it towards openness, transparency, student ownership, and helping all learners with their unique sociocultural situations, then it has potential. If not, then we run the risk of turning Big Education Data into Scary Retention Surveillance.

Humanize Them All, and Let Them Sort Themselves Out: #dLRN15 Reflections

So now that the #dLRN15 conference is over, it’s time for the post-conference reflections to begin. As one of the organizers, I wanted to say a heartfelt “thank you” to everyone that presented, spoke, moderated, slacked, tweeted, blogged, organized, commented, questioned, thought, and attended. I was legitimately concerned over whether or not this conference would “click” with those that attended. But it seems from the tweets, slacks and blog posts that many things did click at some very deep levels.

One thing (out of many) that really stuck out to me was how the word “disruption” seemed almost completely absent from any conversation. While the concept of disruption has been incredibly popular recently, many have rejected the idea from the beginning. Education can’t really be disrupted because it has always been changing (even if too slowly for many). Even if education could be disrupted, would we really want it to be? Disruption can’t be predicted or really even controlled, while typically producing inferior products. For example, mp3s are compressed audio files that produce lesser quality audio experiences when compared to CDs – and you usually don’t even get liner notes. Had there been any ability to control the mp3 disruption, we could have at least utilized lossless technology (like FLAC) and kept the liner notes. Society has mostly accepted an inferior audio technology because of disruption.

To me, a more effective discussion focuses on the change agents that have affected education in small and large ways. Technology is an educational change agent; online education is a change agent; political agendas are change agents. Change agents – while possibly moving at a slower pace – have a greater potential to be influenced and directed for good or bad (or both) than disruption does.

One change agent that we can and should push and influence is the humanization of education, more specifically the designs and technologies we utilize to educate people. This was one of the major themes at #dlrn15: how do we rediscover the people at the center of everything we do in education? My firm belief is that all of our work, policies, discussions, and technology needs to be re-framed with people at the center.

Take my presentations, for example. On the surface, many call the dual-layer model a “MOOC innovation.” Before the conference, I looked at it more as an “instructional design innovation.” And I still do, but I need to start highlighting more that it was not an innovation for innovation sake. The goal of the dual-layer model is to humanize education by creating a practical design for individualized learning. The dual-layer model is an attempt to teach learners how to learn, so that they will realize the epistemological, ontological, even political ideals inherent in all tools (and therefore choose which one to use at any given time accordingly). This power shift is one of many ways to place people at the center of education rather than technology.

Or take larger issues, for another example. We are beginning to understand that where you are born will determine whether you even get to go to college more than any other factor. We tend to look at this as a problem to be solved just because it sounds bad. But we need to reframe this as a human problem, by realizing the de-humanizing effect that these statistics have on the people most affected by them. Our tendency is to focus on solving the problem for the sake of solving the problem: those that are least likely to attend college hear that they probably won’t make it into college just because they were born into a lower socio-economic level and won’t even try. However, our focus should not be on solving a problem, because our tendency will be to come up with a one-size-fits-all solution based heavily on our own context. Complex problems often involve multiple solutions from many different contexts. We need to re-frame these issues to focus on the people at the center of them, so that we can find solutions that work in their actual, human, real-world context. As Maha Bali put it in our ontology panel, standing on the shoulders of giants doesn’t work for her because those giants were not in her context.

Humanize all people, all issues, all change agents, all technology. All of it. All of them.

The other #dLRN15 theme that resonated most with me is listening to students. Education tends to de-humanize our students by classifying them based on how we think they should be classified. As the “experts,” we sort them out based on our classifications and then tell them what they need the most from us. There is value in that to some degree. But why not let the learners sort themselves out, and then offer our services as guides, mentors, fellow pilgrims on the path to “education”? Where are we creating spaces for them to ask hard questions, fail, get back up, learn outside the curriculum, pick apart a tangent, speak for themselves… in other words, be academics rather than our projects?

edugeek-journal-avatarOh yeah – we don’t really let many academics be academics any more. Maybe we should look at this from all levels. Admins: humanize all of your faculty and staff, and let them sort themselves out. Admins, instructors, and instructional designers: humanize all of your learners, and let them sort themselves out. And you could also say to students: humanize all of your instructors, and let them sort themselves out.

Is It Really Possible to Re-do Ed Tech From Scratch?

Jesse Stommel and Sean Michael Morris asked an interesting question at Hybrid Pedagogy a couple of days ago: “Imagine that no educational technologies had yet been invented — no chalkboards, no clickers, no textbooks, no Learning Management Systems, no Coursera MOOCs. If we could start from scratch, what would we build?”

I’m a bit perplexed as to where to start. It’s a great question. But would it even be possible to surgically remove educational technologies from the larger world around them? So much of our technology is connected to external contexts that it may be impossible to even consider. Can we really imagine a world without books? The line between textbook and book is so blurred… probably not.

My concern though is that our field focuses too much on “how technology is shaping us” and not enough on how much we shape our technology. All technology tools have underlying (and often times not so underlying) ontologies, epistemologies, and so on. We could start from scratch, but if we don’t get rid of the dominant mindset of “instructivism/behaviorism as the one-size-fits-all solutionism” that is so prevalent in Ed Tech – we will end up with the same tools all over again.

However, I wouldn’t start over from scratch with technology as much as I would with theory. I would put active learning as the dominant narrative over passive learning. I would pull ideas like connectivism and communal constructivism up to the same level as (or higher than) instructivism. I would dump one-size-fits-all positivism and replace it with context-morphing metamodernism. I would make heutagogy/life-long learning the ending hand-off point of formal education, as opposed to having formal education with a pedagogical “end goal.” I would get rid of the standardization of solutions and replace this ideology with one of different contexts and different solutions for different learners. I would go back in time and make people see the learner as the learning management system instead of a system or program. I would switch from instructor-centered to student-centered at every juncture. And so on.

edugeek-journal-avatarIf we don’t get the right theories and ideas in place in the first place, we will just continue evangelizing people to the same tech problems we have always had, even if we are able to somehow start over from scratch. In other words, the problem is not in our technologies, but our beliefs and theories. Our Ed Tech follows our theory, not the other way around.

(image credit: Gustavo Fiori Galli, obtained from freeimages.com)

Pseudo-Buzzword Soup: Metamodernism and Heutagogy

I learned a hard lesson this week: don’t tweet details about conference proposals before they get accepted. People will get excited about seeing the session, and then you might get rejected. Then you have to go back and break the bad news to everyone.

I have been rejected for conferences many, many, many times, but this one was the first one that was very hard for me. I spent more time and late nights on it than I probably should have, crafting a specific proposal to (in my mind) perfectly match the conference goals. One of my co-workers was visibly shocked that it got rejected. I guess both of us were giving the proposal more credit than it deserved :)

However, since some people on Twitter were interested in it, I decided to share this idea and let my ego take the hit it probably deserves when people see what it was actually about (I’m kidding, but would appreciate any feedback whether you like it or hate it). So, here is the title, the abstract, and some thoughts on where the paper would have possibly gone:

Embracing Heutagogical Metamodernist Paradox in Education: Self-Regulated Courses with Customizable Modalities

Abstract: Most formal or informal educational experiences tend to follow a linear pathway through learning content and activities. Whether these experiences are designed as student-centered or instructor-centered modalities that construct or deconstruct knowledge and skills, learners are still required to stick to a singular pathway through content with the instructor in control of the modality at every point of the course (even if several side paths or options are given). However, new instructional design ideas are challenging these single pathway designs in ways that truly transfer power from instructors to learners. Based on the often overlooked theoretical lenses of heutagogy and metamodernism, these new designs create true learner-centered experiences that utilize customizable pathways through self-regulated courses. This conceptual paper will examine the theories of heutagogy (learning how to learn instead of what to learn) and metamodernism (a cultural narrative that paradoxically embraces modernism and postmodernism), as well as how these ideas relate to education. These theoretical lenses will be used to lay out the basics of dual-layer course design that allows for customizable course modalities. The goal of a customizable modality course design is to encourage learners to self-regulate their own learning through various modalities (layers) by choosing one modality, all of the modalities, or a custom combination of different modalities at different points in the course. The challenges, limitations, desired contexts, and possible benefits of these designs will also be examined. The goal of this paper will be to lay the groundwork for current and future research into dual-layer customizable modality course design.

The bigger picture behind this is that when most people talk about change in higher ed, they are thinking of a specific lens, viewpoint, paradigm, etc. These usually range anywhere from “burn the whole thing down” to “we are on the right path, we just have to be patient because change takes time.” These specific lenses are usually presented to people with the same lens, but rarely do people take into account how their lens doesn’t work for those with other lenses. Their lens is presented as the One Lens that will rule all other lenses. Even beyond that, sometimes the narrative is that those other lenses have to be thrown out to accept the One True Lens.

This, of course, does not sit well with those that accept another lens or set of lenses. And this is probably why we often see slow progress on actual change in education – we are looking for one lens or set of lenses to fix everything – but everyone has different needs, perspectives, etc.

The emerging ideas of metamodernism and heutagogy are not necessarily trying to replace older ideas of modernism and post-modernism or pedagogy and andragogy, but are rather a call to expand those ideas to include the others. They are both pragmatic ideas that basically say “the old ideas had good and bad points… but the parts that were good and bad also tend to change depending on context… so let’s learn when to use these various lenses, when to combine them, and when to reject them on a context by context basis.” In other words, the answer lies in accepting that all solid answers are possible answers at different times.

edugeek-journal-avatarI know I sound like an old hippie strung out on some drug we still don’t have a name for, so I get why these ideas are a hard sell in educational circles. Educators want neat, tidy ideas with clear objectives, no chaos, minimized complexity, and for goodness sake – don’t confuse the learners! We have to teach them to think for themselves by removing every possible obstacle that would cause them to think for themselves to overcome. Wait… what?

(image credit: Patrick Moore, obtained from freeimages.com)

What If The Problem Isn’t With MOOCs But Something Else?

Is this another post about how MOOCs are misunderstood ideas that the critics all get wrong? Not quite. There are problems with MOOCs, but I’m still looking at the conversation about MOOCs in general (continuing from my last post kind of). The general conversation about MOOCs (and for that matter other ed tech innovations such as flipped learning, gamification, etc) tends to be all over the place: insightful, missing the forest for the trees, really odd, kind of just there, etc. All of that is great and makes for interesting discussion. One of the concepts that seems to be getting more traction the past few weeks is “motivation.”

The article about “Why Technology Will Never Fix Education” has already been the subject of many insightful observations. I want to zoom in on one part:

The real obstacle in education remains student motivation. Especially in an age of informational abundance, getting access to knowledge isn’t the bottleneck, mustering the will to master it is. And there, for good or ill, the main carrot of a college education is the certified degree and transcript, and the main stick is social pressure.

I don’t think we can just pass over that last statement with just a simple “for good or ill.” There is a lot of “ill” with that carrot that needs to be unpacked. In an article that very correctly examines the problems of inequality in education, a huge systemic problem is skipped over.

Of course, this article is not the only one. Many other articles have pointed at “student motivation” as being a huge problem with MOOCs. MOOCs are like any other education idea: subject to good and bad instructional design. So you shouldn’t blame the overall idea when learners are just getting bored with bad instructional design. But even beyond that, the above quote speaks to how our system in the U.S. relies on motivational techniques that are predominantly extrinsic in nature. We spend decades indoctrinating learners with this context, and then when an idea comes along that relies mostly on intrinsic motivation, we blame the idea itself rather than our system.

What if MOOCs are just a mirror that shows us the sociocultural problems we don’t want to deal with in our system?

What if the problem is not with the learners, but the way they have been programmed through the years? Grades, credits, failure, tuition, fees, gold stars, extra recess for good grades, monetary rewards, etc are all programmed into learners from a young age.

You can say MOOCs are failing because they lack sufficient “student motivation,” but what if it was actually the case that society has been failing for decades and MOOCs are just exposing this?

Of course, we all recognize many ways that society is failing in education. But what if there are other ways? What if relying on too much extrinsic motivation is a failure? What if we are failing to embrace all of the current and historical research in motivation? What if we know a lot about motivation, but fail to really utilize any of that knowledge? On Twitter yesterday, Rolin Moe pointed out that he never reads discussion of Herman Witkin, cognitive styles, field dependence/independence, etc in relation to motivation. In my circles, I have heard Witkin brought up, but to be honest – I can’t recall anyone trying or applying his ideas (kind of in the same way people in education rattle off Skinner or Bandura and then just don’t really use any of their ideas). These are all ways that our educational system was failing just in the area of motivation for decades before MOOCs (or many other Ed Tech ideas) even came along.

Yet what happens is that ideas like MOOCs are blamed for the historical failure of the system, and those that feel more comfortable within that system recommend pulling the wild ideas back in to make them look more like the existing system. Just think about it: what are the recommendations for fixing “student motivation” in MOOCs? Find a way to add back extrinsic motivation!

I would say: no. We need to find a different path. In fictional entertainment, one of the foundational constructs to reach for is “suspension of disbelief.” You have to help readers gain enough interest in your story, or enough believability in its fictional elements, that they suspend skepticism and engage with the story. Traditional education has typically sought a “suspension of laziness” – looking for ways to get learners to get off their rears and learn (because we always assume that when they don’t want to learn, the problem is their motivation instead of our design). Newer ideas like MOOCs are going past that, to what I guess could be called “suspension of extrinsic motivation” (for lack of better words). What does learning design look like when you remove all of these carrot sticks (or actual paddling sticks) and leave learners to just pure learning? Well… maybe purer learning than what we had.

There are many, many more angles to explore here (not to mention problems with the extrinsic/intrinsic motivation constructs themselves), but I am already getting long-winded. The important idea to consider is that instead of pulling emerging technology and design back towards the tradition of what we already know (which is actually a power struggle by those in power), we need to push forward in the direction we already know we need to go.

(image credit: Manu Mohan, obtained from freeimages.com)

The Mirage of Measurable Success

The last post that I wrote on measuring success in MOOCs created some good, interesting conversation around the idea of measurable success. The most important questions dealt with “why even offer dalmooc if you don’t know what measurable success would look like?”

That’s a good question, and one that I think can be answered in many ways. Honestly, the best answer to that question is “because four world-renowned experts wanted to teach it and a lot of people signed up to take it.” To me, especially in the informal realm of education where dalmooc existed, that is one of the biggest measurable signs of success. We live in a world so full of compulsory education and set degree plans that we forget that choosing to sign up for an informal, voluntary learning experience is measurable success in itself. Over 19,000 people initially said “that sounds interesting, sign me up,” with over 10,000 signing in at one point or another to view the materials. Hundreds of participants were active on Twitter, Facebook, EdX forums, ProSolo, Google Hangouts, and other parts of the course. All voluntarily. To me, that is measurable success.

Another area of measurable success, although definitely more on the qualitative side, is what I covered in the last post:

So maybe when the participants use words like “blended” or “straddled” or “combined,” those statements in themselves are actually signs of success. In the beginning, some claimed that DALMOOC was more cMOOC than xMOOC, but by the end others were claiming it was more xMOOC than cMOOC. Maybe all of these various, different views of the course are proof that the original design intention of truly self-determined learning was realized.

To clarify this a bit more, there were those who thought that dalmooc was more instructivist / xMOOC:

And then there were those who thought it was much more connectivist / cMOOC (myself included).

So to me, that is another realm of measurable success – learners came out of the experience with vastly different views on what happened. That was a goal we had.

However, I know that when people talk about “measurable success,” they are usually referring more to standardized test results, student satisfaction, completion rates, and – the holy grail of education – grades! The elephant in the room that many people won’t deal with, but we all know is true, is that these measures of success are often a mirage.

Standardized tests are probably the biggest mirage of all. The problem is that a score of 90% on a test really only means that a learner was able to mark 90% of the questions correctly, not necessarily that they actually understood 90% of the material. They may have only understood 60% of it and guessed the next 30% correctly. The fact that the right answer is sitting somewhere in a list of multiple choice options should negate their usefulness as a way to measure success, but our society still chooses to ignore this problem. Then you can add into this mix that most multiple choice questions are poorly written in ways that give away the answers to people who are taught how to game them (like I was).
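To put some rough numbers on the guessing problem, here is a quick simulation (a hypothetical sketch I am adding for illustration, not anything from an actual test) of a student who truly understands only part of the material and guesses randomly on the rest of a four-option multiple choice test:

```python
import random

def simulate_score(n_questions, known_fraction, n_choices=4, seed=None):
    """Score for a student who truly knows `known_fraction` of the
    material and guesses randomly on everything else."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_questions):
        if rng.random() < known_fraction:
            correct += 1  # actually knew the answer
        elif rng.random() < 1 / n_choices:
            correct += 1  # lucky guess
    return correct / n_questions

# A student who understands only 60% of the material averages around
# 70% on a 4-option test (60% + 40% * 1/4) -- the score overstates
# understanding even before any question-gaming tricks come into play.
scores = [simulate_score(50, 0.60, seed=s) for s in range(1000)]
print(round(sum(scores) / len(scores), 2))
```

And on a lucky run, a student like this can score well above that average, which is the point: the number on the test is knowledge plus noise, and we only ever see the sum.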

Then there is the problem of coming up with questions for tests. Some tests contain, say, two questions about the core knowledge that learners should have gained and then a whole lot of related trivia that they could just Google if needed. Yet they could get the two essential questions wrong and all the rest correct and still be labeled as “mastering” the concept. Rubrics for papers or projects often do the same thing – giving most points to grammar and following instructions and few to actual content mastery. Someone could write a great paper that shows no knowledge of the topic at hand but still pass because they got every other area perfect.

Add to this that we then compare two children to each other based on this false sense of “success.” One child could have tanked the trivia on a test but gotten all of the core content correct, and still be labeled as less successful than the one who got the trivia right and the core knowledge wrong… just because it’s all on the same test. Oh, and let’s not forget the practice of giving similar or equal weights to all questions on a test when not all questions are really equal. Again, two learners could get the same score, but one only answered the easy questions correctly while the other answered all of the challenging ones correctly.
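The equal-weighting problem is easy to see with a small made-up example (the questions, students, and weights here are all hypothetical, just to show the arithmetic): under equal weighting two very different students tie, while weighting the core questions more heavily separates them.

```python
# Hypothetical 5-question test: two "core" questions and three trivia
# questions. Under the usual equal-weight scheme, each is worth one point.
student_a = {"core1": 1, "core2": 1, "trivia1": 0, "trivia2": 0, "trivia3": 1}
student_b = {"core1": 0, "core2": 0, "trivia1": 1, "trivia2": 1, "trivia3": 1}

def equal_score(answers):
    return sum(answers.values()) / len(answers)

print(equal_score(student_a), equal_score(student_b))  # 0.6 0.6 -- a tie

# Weighting the core questions 3x reveals who actually mastered the concept.
weights = {"core1": 3, "core2": 3, "trivia1": 1, "trivia2": 1, "trivia3": 1}

def weighted_score(answers):
    return sum(weights[q] * v for q, v in answers.items()) / sum(weights.values())

print(round(weighted_score(student_a), 2), round(weighted_score(student_b), 2))
```

Student A knows the core content and Student B knows the trivia, yet the equal-weight score calls them identical.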

And speaking of different learners, there are always the oft-ignored problems of cultural bias in testing and learning. Are learners not testing well because they didn’t learn, or were there cultural references on the test they didn’t get? Did a learner really learn the content, or were they just able to quickly memorize some factoids because of some weird thing Aunt Ida said about planets that helped them connect the new information to a family quirk? Are they being labeled as smarter because they are, or because their weird Aunt Ida gave them a memory hook that helped them memorize?

Most of what we call “measurable success” in education is really just a mirage of numbers games. For those like me who fell on the privileged side of those games, it was a great system that we probably want to fight to keep. And we are most likely the ones now in control, so….

Now, of course, this is not to say that learning isn’t happening. This is more about how most institutions measure learning and success. I believe people are always learning, formally and informally, even if it’s not always what they had intended to learn. It just takes a lot of time, effort, and money (yes – money!) to truly assess learning, and the educational field in general is being tasked with the opposite: “Do better assessment with less money, less time, and less effort (i.e., people power)!” There are no real easy answers, but there is a problem with the system and the culture that drives that system that needs to be addressed before “measurable success” becomes a trustworthy idea.