Updates on the Never-Ending Reclaim Project

So with all of the weirdness that is going on in the Ed Tech world recently and the general world today, I needed something to take my mind off of things. I wanted to add a quick update about my Never-Ending Reclaim Project at the end of that post… but it ended up being too long! So, in the interest of archiving the good, the bad, and the ugly of what I am finding out there (not all of it is being kept even if I am reclaiming access)…. here are some interesting (to me, at least) updates of where things are.

First of all, it's pretty weird trying to make sure you have ownership of every account you have created. Random things in life suddenly remind you of things you had totally forgotten. Walking by a store one day reminds you "oh, hey – RedBox still exists and I think I had an online account there as well." Or a random link reminds you that you also had a Reddit account at one time. All reclaimed!

I finally came to a place of acceptance with the not-quite-perfect html exports of WordPress sites. It seems that everything from site suckers to WP plugins just doesn't get what relative truly means. Or maybe I just don't get the settings correct? Anyways – they always seem to add a slash at the beginning of base-level files, like this: "/images/picture.jpg" or "/css/style.css" or whatever. That forces my computer and the websites where I deposit them to look in the base directory for everything – but I am trying to get them to go in a sub-folder of an "archive" folder. So the browser just sits there forever trying to figure out what is going on. For less complex websites, it's easy enough to remove that slash quickly ("images/picture.jpg" or "css/style.css" or whatever) – and boom! instant relative website that works online or offline wherever I put it. When archiving WordPress sites with complicated folder structures, it takes a bit of thinking to know how many "../" or "../../" etc. to replace those "/" with – and it gets time consuming if you have to think through every "/" in your document.

There is one workaround to make it a bit easier. I have found exporting from within WordPress to be a bit better than external site suckers, because WordPress will still get you all of your orphaned files and pages. This means that bad link you didn't realize was there can be fixed with one edit, rather than jumping into archive.org to hope and pray that the file is there (only about a 50/50 success rate for me so far, unfortunately). Plus, the exported pages hard code the links with your website address in them – making it very, very quick and easy to find and replace those absolute links with relative "../../" links page by page. I wrote about this before – it's still the best option I have found so far. A rough sketch of how that find-and-replace could be scripted is below.
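For anyone who wants to script that step instead of doing it page by page, here is a minimal sketch of the idea in Python. It is not a polished tool: ARCHIVE_ROOT and SITE_URL are placeholders for your own export folder and old site address, it only handles double-quoted href/src attributes, and you should run it on a copy of the archive since it rewrites files in place.

import pathlib
import re

ARCHIVE_ROOT = pathlib.Path("archive/humanmooc")  # placeholder: your exported site folder
SITE_URL = "https://www.example.com/humanmooc/"   # placeholder: the old hard-coded address

for page in ARCHIVE_ROOT.rglob("*.html"):
    depth = len(page.relative_to(ARCHIVE_ROOT).parts) - 1
    prefix = "../" * depth  # empty string for pages at the top level
    html = page.read_text(encoding="utf-8", errors="ignore")
    # absolute links to the old site become relative to this page's depth
    html = html.replace(f'href="{SITE_URL}', f'href="{prefix}')
    html = html.replace(f'src="{SITE_URL}', f'src="{prefix}')
    # root-relative links like "/css/style.css" get the same prefix
    # (the (?!/) keeps protocol-relative "//cdn..." links out of it)
    html = re.sub(r'(href|src)="/(?!/)', rf'\1="{prefix}', html)
    page.write_text(html, encoding="utf-8")

The only real trick is letting the folder depth decide how many "../" each page gets instead of eyeballing it, which is the same problem the Simply Static exports mentioned further down run into.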

The reason this is important is because the old LINK website has bitten the dust for now, it seems. This was apparently a problem with Google and not the people running the site. They tried everything they could to renew the website registration – but it was originally registered through Google. Let me warn you: don't do that. Registering starts off easy enough… but renewals get harder and more complicated each time. I experienced this myself a couple of years ago – and it just got worse after that.

Anyways, I was able to get html archives of all of the LINK Lab sites just in case something went wrong (again, it just seemed inevitable the way Google was going). So I have html back-ups of DALMOOC, the Pivot to Online Learning MOOC, the Open Ed MOOC, etc. Most of these are hard coded to work on my personal website – but I have been able to get DALMOOC converted over to true relative html. I can easily move that folder wherever I want – or send the files to whatever archive site the good folks still bearing the LINK torch set up for LINK Lab. I will work on the other courses as I get time as well.

The other weird thing that happened is that I actually got control of my MySpace account back! The form that I linked to in the last post… actually worked? I mean, it took over a month to hear anything, but I am back in. And it is a sad wasteland in there. Almost all real data is gone – and only a few pictures remain of the many I uploaded. But I now control my corner of the wasteland at least.

I was also able to somewhat re-create the custom profile I made back in the day. The html template I found on GitHub was cool, but also several years beyond the last version I had used, so my resurrected custom code didn't work. But I poked around in archive.org and found a save of Tom's profile from the date that I saved my custom code. I put the two together, and BAM! I had my profile back in html! Well, it was Tom's profile styled like mine. So I started replacing Tom's information with mine as best as I could remember it (or using Latin sample text where I couldn't). I also found a way to make an image of the profile music player that plays the sample of music I had on there if you click it.

Now… before I share the link, please keep in mind that I realize this profile has some cultural appropriation. At the time, I was married to someone who traced their heritage back to India, so I was trying to mix her heritage and mine (Irish) on my MySpace page. But anyways – today I would replace the Hindi and sitar (yes, I did actually learn to play a few songs on it, even though I have forgotten how) with something from my own cultural background. But this is what it was back in the day.

Now, if only I could get the Foursquare/Swarm people to be as… umm… "responsive" as the MySpace team…

I also seem to have found some of the limitations of Ruffle – you can't really import external files (images, other SWF files, etc.), which I did a lot in the E-SPY X-500. So I just had to link to an external list of the lessons that I wanted to import into the game. I set it up that way because we wanted to be able to upgrade the lessons as needed without re-doing the entire game. For example, Tobacco Lesson 11 lets the student build a simple tobacco awareness website – it was pretty basic, but we had bigger plans to make it more robust. But at least it works as originally designed now. Oh, and you have to use the back button to get back to the list.

I also found that many ActionScript functions don’t work in Ruffle, like the code that makes text scroll within small boxes. Oh, well. Maybe there has been an update that I need to look into.

After doing some poking around on Digg and Delicious, it seems that my original Digg account is gone forever (unless someone knows of a way to log in with email?), but Delicious is still around. Kind of. I was able to log in and export my posts from there. It seems like it is just a data repository of your old stuff (you can't add new stuff), but that is a start. You can export to JSON and HTML formats – if you can remember your password (it seems like the password reset function is not implemented yet). The html format also doesn't look that great, and it saves the tags and dates even though they aren't displayed. So I decided to grab the html and CSS from their site to make my archive look a lot cleaner. I also decided to go for 60 results per page rather than 20, because mine were all short "Ed Tech news updates" type things anyways.
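If it helps anyone doing the same thing, here is a minimal sketch of how that cleanup could be scripted. It assumes the JSON export is a list of objects with "href", "description", and "time" fields (check your own export first, since the field names may differ), and it just spits out plain pages of 60 links each that you can wrap in whatever html and CSS you grabbed from the site.

import json
import pathlib

PAGE_SIZE = 60
# assumed field names based on my export; adjust to match yours
posts = json.loads(pathlib.Path("delicious_export.json").read_text(encoding="utf-8"))

for page_num, start in enumerate(range(0, len(posts), PAGE_SIZE), start=1):
    chunk = posts[start:start + PAGE_SIZE]
    items = "\n".join(
        f'<li><a href="{p.get("href", "")}">{p.get("description", "(no title)")}</a> '
        f'<span class="date">{p.get("time", "")}</span></li>'
        for p in chunk
    )
    page = f'<ul class="bookmarks">\n{items}\n</ul>\n'
    pathlib.Path(f"bookmarks-page-{page_num}.html").write_text(page, encoding="utf-8")

From there it is just a matter of dropping each page into the cleaner template.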

Anyways, I find this type of stuff fascinating. Some of you might think I am trying too hard to reclaim stuff that should be forgotten, and maybe you are right. Especially after seeing how my old MySpace profile looked. I still need to find a way to convert old Flash files to html5 (without buying an Adobe subscription). I also wonder if I can find a site that emulates old LAMP installs so I can get a 14-year-old export of WordPress working again (WP tells me it's too old to import now – boo!). More things to look into!

The Never-Ending Reclaim Project Continues

Like many of you, I have been spending a considerable amount of time reclaiming my data and spaces online. A lot of that is focused on downloading and archiving my data (especially blog posts, reviews, comments, etc) from a myriad of websites I have used through the years. Well, decades now. I don’t know if this post will be of interest to anyone, but it will be a record (Jim Groom-style) for me – and hopefully someone will stumble across a couple of problems I have run into and have some suggestions for me.

So this all started several years (or more) ago when I ran into the idea of the IndieWeb and realized I didn't have to lose data to dying websites like MySpace and Jaiku. I could take a proactive approach by collecting my information and storing it on my own site (and the awesome folks at Reclaim Hosting make it super easy in many ways). So I started downloading data from various websites, and importing blog or informational posts from any website that I could. Then I realized that two email addresses I used for a lot of websites through the years could possibly die someday, so I started going back to wherever I could find those email addresses and reclaimed access to those services. Most of those were dead or dying websites, but the process uncovered more posts and blogs to archive. Then several unexpected, unfortunate events happened to me last year and this year. Finding out my job in academia was being eliminated caused me to comb through 15 years of signing up for services, journals, and all kinds of other things, which turned up even more stuff to reclaim. Then an unexpected divorce caused me to comb through even more accounts online, bringing still more things to reclaim to light. So here are the basics of what I found out.

Downloading your data from websites is usually the most straightforward process, as long as the site offers a data download option or an export feature for your posts. One thing I have noticed is that the data that gets downloaded does change from time to time – for instance, a good friend of mine suddenly died a few years ago and his family deleted all of his online accounts. So now there are posts on Facebook where he and I had long conversations that just look like I am arguing with myself. So instead of replacing previous data downloads with new, fresh downloads – I keep an archive of past exports. Did a past one capture those conversations that are now one-sided? I don't know, but I should go look. I really hope so.

Then there were things like Jaiku that are long gone, but I never got a chance to download the data. Bummer. However, thanks to the work of the Internet Archive, I did find a lot of my Jaiku posts in their archives. So I decided to copy the html and stitch together my own archive of some of my jaikus – including a few comments that I could also find and some pages from the Jaiku site just for nostalgia. Clicking on any avatar on that page leads to me. Some of the other links work as well. But this little archive shows that even 12 years ago Jaiku was way more interesting than Twitter. I also archived as much as I could of the EduGeek Journal Jaiku channel as well. Interesting that this is where Twitter hashtags directly got the # from (technically it came from older sources, but it was Jaiku's Channels that made Twitter users start using the # to mimic that function).
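For anyone wanting to do the same kind of stitching, the Wayback Machine has a CDX search API that will list every capture it has for a site, which makes it easier to see what is even there before copying html by hand. Here is a minimal sketch of that lookup (the jaiku.com pattern is just an example, so swap in whatever account or domain you are hunting for):

import requests

resp = requests.get(
    "https://web.archive.org/cdx/search/cdx",
    params={
        "url": "jaiku.com/*",        # everything captured under the domain
        "output": "json",
        "filter": "statuscode:200",  # skip redirects and errors
        "collapse": "urlkey",        # one row per unique URL
    },
    timeout=60,
)
rows = resp.json()
header, captures = rows[0], rows[1:]  # first row is the column names
for row in captures[:20]:
    capture = dict(zip(header, row))
    # each snapshot can be viewed at /web/<timestamp>/<original url>
    print(f'https://web.archive.org/web/{capture["timestamp"]}/{capture["original"]}')

Each of those links opens a snapshot you can copy html out of.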

One site that is sadly long gone is MySpace. I can’t even sign in or reset my password anymore (probably hacked a long time ago). But the important data is gone – it seems MySpace lost or deleted most of it. I should have captured the html and custom CSS I worked for hours on way back in the day. But even the mighty Internet Archive didn’t capture any of that. However, after digging around some, I found this form to submit a support ticket, and then a GitHub project that has Tom’s MySpace profile html. And then searching through my files at home, of course I kept a copy of the CSS I created to customize my profile. So I might have to just make up a bunch of stuff about myself to replace the stuff about Tom, but I could actually have an archive of all of the time I wasted…. errr… “invested” in learning how to hack a custom MySpace profile.

Of course, the biggest project has been capturing my blogs. I thought I only had a handful of Blogger sites to import to WordPress, but then I kept digging up more. WordPress sites for several grad classes. Old conference blogs. Old work blogs. Some attempts to use Known. Even a short attempt at Tumblr. So many short blogs. So I imported all that I could into one WordPress blog archive on my own site. All of that is easy. For some of the blogs that I liked, I even created html archives of the layout. The one that I am having trouble with is Instagram. I would love to import all of my Instagram posts into a WordPress blog with a template like the one I set up for my artwork gallery. I found some suggestions online for how to do that, but they only import the last 20 entries. I can import the rest one by one using copy and paste if I want to, but hopefully someone will come up with a way to automate it. Any ideas?
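In case it sparks any ideas, here is the rough direction I have been thinking about – only a sketch, definitely not a finished importer. It assumes the Instagram "Download Your Information" JSON export has a content/posts_1.json file where each post carries a "media" list with "uri", "title", and "creation_timestamp" fields (the layout has changed over the years, so check yours first), and it pushes draft posts into WordPress through the standard REST API using an application password. Uploading the actual image files to the media library would be a second step.

import datetime
import json
import pathlib

import requests

EXPORT_DIR = pathlib.Path("instagram-export")       # placeholder: unzipped data download
WP_API = "https://example.com/wp-json/wp/v2/posts"  # placeholder: your own site
AUTH = ("your-user", "application-password")        # from the WP user profile screen

posts = json.loads((EXPORT_DIR / "content" / "posts_1.json").read_text(encoding="utf-8"))

for post in posts:
    media = post.get("media", [])
    if not media:
        continue
    first = media[0]
    caption = first.get("title", "") or "(no caption)"
    when = datetime.datetime.fromtimestamp(first.get("creation_timestamp", 0))
    body = {
        "title": caption[:80],
        "content": f"<p>{caption}</p>\n<!-- local file: {first.get('uri', '')} -->",
        "status": "draft",  # review each one before publishing
        "date": when.isoformat(),
    }
    resp = requests.post(WP_API, json=body, auth=AUTH, timeout=30)
    resp.raise_for_status()

Everything lands as a draft so nothing goes public until it has been checked against the original post.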

Of course, some of these blogs were older WordPress installations on my website, while others were attached to classes like the HumanMOOC that only make sense as a complete package. But it's a pain to keep over a dozen WordPress installations updated and working. So I decided it was time to archive some sites as they are as html exports and shut down the WordPress versions. The problem is, I really wanted a standalone html export that could be moved to any folder or website and still work. The most recommended WordPress html export tool that I found when I started a few years ago (WP Static) doesn't really work well for the relative links needed to do that. I could export to a defined folder on my site and it would hard code those specific links into every page, but then I can't move it around (the Jaiku archive I created above can work anywhere I put it, or even offline if needed). WP Static does have a relative link function, but it keeps messing up the number of "../"s you need to make links work. Half the time, it just gets lost and serves up a blank page. Even a quick search and replace on a page doesn't fix it.

So I looked around at other options, but none worked any differently. Even desktop-based site suckers… well, they suck too much. What I mean is, if there is a link to another website on your site, they will try to suck down that entire site as well! Finally, I found Simply Static. It has a relative link function as well, and it doesn't work right out of the download either. But it only messes up in one way, and a quick find and replace on a page makes your archived page spring to life. The only problem is that because of the layers upon layers of sub-directories that WordPress uses, you have to do a find and replace per page to get the correct number of "../"s right. So it's a quick process on simple sites… but a longer process on more complex sites. But it works in the end. I have a standalone html archive of the HumanMOOC that I helped to co-design and co-teach that will work wherever I put it. A bonus feature is that I got to finally fix some of the things that I didn't have time to get right in the WordPress version. The activity bank images never worked right, but now I can have an image per activity. The blog hub now has individual avatars per person so you can see who posted what. The DALMOOC, OpenEdMOOC, and Pivot MOOC should be coming soon. ish.
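One thing that saves some sanity on the bigger exports is a quick check for links that still don't resolve after all of the find-and-replace passes, since those are the pages where the "../" count is still wrong. This is just a rough sketch, with EXPORT_ROOT standing in for your own archive folder:

import pathlib
import re
from urllib.parse import urlparse

EXPORT_ROOT = pathlib.Path("archive/humanmooc")  # placeholder: your exported site folder
LINK = re.compile(r'(?:href|src)="([^"#?]+)')    # double-quoted href/src values only

for page in EXPORT_ROOT.rglob("*.html"):
    html = page.read_text(encoding="utf-8", errors="ignore")
    for target in LINK.findall(html):
        # skip external, protocol-relative, and mailto links
        if urlparse(target).scheme or target.startswith("//"):
            continue
        resolved = (page.parent / target).resolve()
        if not resolved.exists():
            print(f"{page}: broken link -> {target}")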

Then there were other random things I needed to archive. All of my Storify archives, which neatly exported to html, but are slowly dying out as people close accounts, or Twitter changes how they display pictures, or a hundred other reasons. Is it worth going through each one and grabbing what is left? Several chatbots I created are still kicking around, but they are also falling apart, since I apparently need to update the code so it doesn't point to the dead LINK Lab website. Add that one to my massive to-do list. Even an old OLC presentation that I did "choose your own presentation topic" style with the audience.

Oh, and going way back, there are a good number of html websites I designed between 1999 and 2005 that I am still keeping around for memory's sake. Most are too embarrassing to link to, but the one I like the most is the one that I mention in several bios – the website I created to help students when I was an 8th grade Science teacher: Mr. Crosslin's Class Online. It was also my first serious attempt at putting course work online.

Speaking of old sites, I have so many sites that I built in Flash that I have been trying to figure out what to do with for years. I can still open Flash on an ancient computer I have, so I have exported all of my Flash files to image and/or movie files. But some are still a bit complex for that, and even the less complex ones are no fun to watch as a movie. Is there a way to convert FLA files to HTML5? I have looked a little and didn’t like what I found. If anyone knows of a way, even if I have to pay, please let me know.

So I thought for a while that my archives of several websites I created with Flash would be limited to still images of what happened. But then I came across Ruffle. You drop a couple of files on your site, add a few lines of code to your page, and – BAM! – your Flash files start magically working. So now I can get the old U Monthly Magazine archives back online (a lot is still missing, but I will dig it out eventually). My favorite Flash website I (mostly) created is the E-SPY X-500 – a goofy attempt at an educational game that I created for a company I worked for after teaching. Go ahead and kick around in there – not everything works (yet, but it's on the list), but see if you can find the hidden Easter eggs. You can log in with any username or password over three characters. It has been totally disconnected from the MySQL database, so no data is collected. I should point out that the cartoon characters you will see once inside were not drawn by me, but by our staff artist at the time, Samuel Torres.

Of course, I have also been going through and making sure that my main portfolio is up to date, because it really serves as an archive of papers, presentations, videos, artwork, and other projects as well. I have also been working on things like a games archive. All kinds of random attempts to create games are in there, including some of the ones I mentioned above (I still need to create a Twine environment for the This Picture app game idea). Oh, and somewhere in the middle of all of this, I am also trying to work with my Mom to create a tribute site to my Grandfather's artwork, since he sold paintings and worked as a staff artist for a newspaper in a major city.

Changing over email addresses is quite the chore. I had to look for old accounts tied to two old email addresses, and then I had to go through 15 years of work emails to see which accounts I would want to keep after leaving (mostly access to journals I published in, review accounts, professional website accounts, and others like that). Most places were pretty straightforward. Some places were not. It took a lot of work to get control of my Flickr account. I still can't get control of my MySpace account – does their support team even still exist? A lot of these accounts I will probably shut down. But I was surprised at how haphazard I was in using whatever email address to sign up for whatever account. At least it's all back with me again. And, of course, trying to separate 20 years of joint accounts from my former marriage was a huge undertaking. Some places make it nearly impossible to do that. But then I had to go back through all of these accounts I got back, or websites I created, and update bio listings about family where needed.

So, even though there isn't a light at the end of the tunnel yet, I know that a sighting of that light should come soon. Despite all that is left, I still feel that I have cut back my online presence to a streamlined, manageable amount. Someday I will be shutting down some massive websites like this one, so I hope to find even better ways to convert WordPress to html as well. Which I guess I will… give to my son some day? Donate to a museum? Will people even care about archives like this in a few decades? I guess I will figure that out someday…

People Don’t Like Online Proctoring. Are Institutional Admins Getting Why?

You might have noticed a recent increase in the complaints and issues being leveled against online proctoring companies. From making students feel uncomfortable and/or violated, to data breaches and CEOs possibly sharing private conversations online, to a growing number of student and faculty/staff petitions against the tools, to lawsuits being leveled against dissenters for no good reason, the news has not been kind to the world of Big Surveillance. I hear the world's tiniest violin playing somewhere.

It seems that the leadership at Washington State University decided to listen to concerns… uhhh… double down and defend their position to use proctoring technology during the pandemic. While there are great threads detailing different problems with the letter, I do want to focus in on a few statements specifically. Not to specifically pick on this one school, but because WSU’s response is typical of what you hear from too many Higher Ed administrations. For example, when they say…

violations of academic integrity call into question the meaningfulness of course grades

That is actually a true statement… but not in the way it was intended. The intention was to say that cheating hurts academic integrity because it messes up the grade structures, but it could also be taken to say that cheating highlights the problem with the meaningfulness of grades themselves, because cheating really doesn't affect anyone else.

Think about it: someone else cheats, and it casts doubt on the meaning of my grade if I don't cheat? How does that work exactly? Of course, this is a nonsense statement that really highlights how cheating doesn't change the meaning of grades for anyone else. It's like the leaders at this institution are right there, but don't see the forest for the trees: what exactly does a grade mean if the cheaters that get away with it don't end up hurting anyone but themselves? Or does cheating only cause problems for non-cheaters when the cheaters get caught? How does that one work?

But let's focus here: grades are the core problem. Yes, many people feel they are arbitrary and even meaningless. Still others say they are unfair, while some look at them as abusive. At the very least, you really should realize grades are problematic. Students can guess and get a higher grade than what they actually know. Tests can be gamed. Too many times, questions have bias and discrimination built in. And so on. Online proctoring is just an attempted fix for a problem that existed long before "online" was even an option.

But let’s see if the writers of the letter explain exactly how one person cheating harms someone else… because maybe I am missing something:

when some students violate academic integrity, it’s unfair for the rest. Not only will honest students’ hard work not be properly reflected…. Proctoring levels the playing field so that students who follow the rules are not penalized in the long run by those who don’t.

As someone who didn't cheat in school, I am confused as to how this exactly works. I really never spent a single minute caring about other students' cheating. You knew it happened, but it didn't affect you, so it was their loss and not yours. In fact, you never lost anything in the short or long run from other students' cheating. I have no clue how my hard work was not "properly reflected" because of other students' cheating.

(I would also note that this "level the playing field" language assumes that proctoring services catch all "cheaters" online, just like having instructors in the classroom on campus supposedly meant that all of the "cheaters" in those classes were caught. But we all know that is not the case.)

I have never heard a good answer for how this supposed "penalization" works. Most of the penalization I know of in classes comes from systemic issues against BIPoC students, and it happens in ways that proctoring never deals with. You sometimes wish institutions would put as much money into fighting that as they put into spying through student cameras…

But what about the specific concerns with how these services operate?

Per WSU’s contract, the recorded session is managed by an artificial intelligence “bot” and no human is on the other end at ProctorU watching the student. Only the WSU instructor can review the recorded session.

A huge portion of the concern about proctoring has been about the AI bots – which are here presented as an “it’s all okay because” solution…? Much of the real concern many have expressed is with the algorithms themselves and how they are usually found to be based on racist, sexist, and ableist norms. Additionally, the other main concern is what the instructor might see when they do review a recording of a student’s private room. No part of the letter in question addresses any of the real concerns with the bigger picture.

(It is probably also confusing for people to be told that no one is watching on the other side of the camera when there are so many complaints online from students that have had issues with human proctors, especially ones that were "insulting me by calling my skin too dark," as one complaint states.)

The response then goes on to talk about getting computers that will work with the proctoring service to students that need them, or having students come to campus for in-person proctoring if they just refuse to use the online tool. None of this addresses the concerns of AI bias, home privacy, or safety during a pandemic.

The point I am making here is this: if you are going to respond to concerns that your faculty and staff have, make sure you are responding to the actual concerns and not some imaginary set of concerns that few have expressed. There is a bigger picture as to why people are objecting to these services, which – yes – may start with feeling like they are being spied on by people and/or machines. But just saying "look – no people! (kind of)" is not really addressing the core concerns.

QM and the Politics of the “Unbiased” Bias

So it started off innocently enough, with a Tweet about concerns regarding the popular QM rubric for course review:

Different people have voiced concerns with the rubric through the years… usually not saying that it is all bad or anything, but just noting that it presents itself as a “one rubric for all classes” that actually seems to show a bias for instructor-centered courses with pre-determined content and activities. Now, this describes many good classes – don’t get me wrong. But there are other design methodologies and learning theories that produce excellent courses as well.

The responses from the QM defenders to the tweet above (and those like me that agreed with it) were about many things that no one was questioning: how it is driven by research (we know, we have done the research as well), how lots of people have worked for decades on it (obviously, but we have worked in the field for decades as well; plus many really bad surveillance tools can say the same, so be careful there), how QM is not meant to be used this way (even though we are going by what the rubric says), how it is the institution's fault (we know, but I will address this), how people who criticize QM don't know that much about it (I earned my Applying the QM Rubric (APPQMR) certificate on October 28, 2019 – so if I don't understand, then whose fault is that? :) ), and so on.

Now, technically most of us weren't "criticizing" QM as much as discussing its limitations. Rubrics are technology tools, and in educational technology we are told not to look at tools as the one savior of education. We are supposed to acknowledge their uses, limitations, and problems. But every time someone wants to discuss the limitations of QM, we get met with a wall of responses bent on proving there are no limitations to QM.

The most common response is that QM does not suggest how instructors teach, what they teach, or what materials they use to teach. It is only about online course design and not teaching. True enough, but in online learning, there isn’t such a clear line between design and teaching. What you design has an effect on what is taught. In fact, many online instructors don’t even call what they do “teaching” in a traditional sense, but prefer to use words like “delivery” or “facilitate” in place of “teaching.” Others will say things like “your instructional design is your teaching.” All of those statements are problematic to some degree, but the point is that your design and teaching are intricately linked in online education.

But isn't the whole selling point of QM the fact that it improves your course design? How do you improve the design without determining what materials work well or not so well? How do you improve a course without improving assignments, discussions, and other aspects of "what" you teach? How do you improve a course without changing structural options like alignment and objectives – the things that make up "how" you teach?

The truth is, General Standards 3 (Assessment and Measurement), 4 (Instructional Materials), and 5 (Learning Activities and Learner Interaction) of the QM rubric do affect what you teach and what materials you use. They might not tell you to “choose this specific textbook,” but they do grade any textbook, content, activity, or assessment based on certain criteria (which is often a good thing when bad materials are being used). But those three General Standards  – along with General Standard 2 (Learning Objectives (Competencies)) – also affect how you teach. Which, again, can be a good thing when bad ideas are utilized (although the lack of critical pedagogy and abolitionist education in QM still falls short of what I would consider quality for all learners). So we should recognize that QM does affect the “what” and “how” of online course design, which is the guide for the “what” and “how” of online teaching. That is the whole selling point, and it would be useless as a rubric if it didn’t help improve the course to do this.

So, yes, specific QM review standards require certain specific course structures that do dictate how the course is taught. The QM rubric is biased towards certain structures and design methodologies. If you are wanting to teach a course that works within that structure (and there are many, many courses that do), QM will be a good tool to help you with that structure. However, if you start getting into other structures of ungrading, heutagogy / self-determined learning, aboriginal pedagogy, etc, you start losing points fast.

This has kind of been my point all along. Much of the push back against that point dives into other important (but not related to the point) issues such as accreditation, burnout, and alignment. Sometimes people got so insulting that I had to leave the conversation and temporarily block it out of my timeline.

QM evangelists are incredibly enthusiastic about their rubric. As an instructional designer, I am taught to question everything – even the things I like. Definitely not a good combination for conversation it seems.

But I want to go back and elaborate on the two points that I tried to stick to all along.

The first point was a response to how some implied that QM is without bias… that it is designed for all courses, and because of this, if some institutions end up using it as a template to force compliance, that is their bias and not QM's fault. And I get it – when you create something and people misuse it (which no one is denying happens), it can be frustrating to feel like you are being blamed for others' misuse. But I think if we take a close look at how QM is not unbiased, and how there are politics and bias built into every choice they made, we can see how that has the effect of influencing how it is misused at institutions.

QM is a system based on standardization, created through specific choices that carry bias, in a way that biases it towards instructor-centered, standardized implementation by institutional systems that are known to prefer standardization.

I know that sounds like a criticism, but there are a couple of things to first point out:

  • Bias is not automatically good or bad. Some of the bias in QM I agree with on different levels. Bias is a personal opinion or organizational position, therefore choosing one option over another always brings in bias. There is no such thing as bias-free tech design.
  • The rubric in QM is Ed-Tech. All rubrics are Ed-Tech. That makes QM an organization that sells Ed-Tech, or an Ed-Tech organization.  This is not saying that Ed-Tech is their “focus” or anything like that.

Most people understand that QM was designed to be flexible. But even those QM design choices had bias in them. All design choices have bias, politics, and context. And when the choices of an entity such as QM are packaged up and sold to an institution, they are not being sold to a blank slate with no context. The interaction of the two causes very predictable results, regardless of what the intent was.

For example, the QM rubric adds up to 100 points. That was a biased choice right there. Why 100? Well, we are all used to it, so it makes it easy to understand. But it also connects to a standardized system that most of us were a part of growing up, one that didn't have a lot of flexibility. If we wanted to score higher, we had to conform. When people see a rubric that adds up to 100, that is what many think of first. Choosing a point total that connects with pre-existing systems that also use 100 as the highest score is a choice that brings in all of the bias, politics, and assumptions that are typically associated with that number elsewhere.

Also, the ideal minimum score is 85. Again, that is a biased choice. Why not 82, or 91, or 77? Because 85 is the beginning of the usual "just above average" score (a "B") that many are used to. Again, this connects to a standardized system we are used to, and reminds people of the scores they got in grade school and college.

In fact, even using points in general, instead of check marks or something else, was another biased choice on QM's part. People see points and they think of how those need to add up to the highest number. This mindset affects people even when they get a good number: think of how many students get an 88 and try to find ways to bump it up to a 90. This is another systemic issue that many people equate to "follow the rules, get the most points."

Then, when you look at how long some of the explanations of the QM standards are, again that was a choice and it had bias. But when combined with an institutional system that keeps its faculty and staff very busy, it creates a desire to move through the long, complicated system as fast as possible to just get it done. This creates people that game the system, and one of the best ways to hack a complex process that repeats itself each course is to create a template and fill it out.

While templates can be a helpful starting place for many (but not everyone), institutional tendency is to do what institutions do: turn templates into standards for all.

This is all predictable human behavior that QM really should consider when creating its rubric. I see it in my students all the time – even though I tell them that there is flexibility to be creative and do things their way, most of them still come back to mimicking the example and giving me a standardized project.

You can see it all up and down the QM rubric – each point on the rubric is a biased choice. Which is not to say that they are all bad, it’s just that they are not neutral (or even free from political choices). Just some specific examples:

  • General Standard 3.1 is based on “measure the achievement” – which is great in many classes, but there are many forms of ungrading and heutagogy and other concepts that don’t measure achievement. Some forms of grading don’t measure achievement, either.
  • General Standard 3.2 refers to a grading policy that doesn’t work in all design methodologies. In theory, you could make your grading policy about ungrading, but in reality I have heard that this approach rarely passes this standard.
  • General Standard 3.3 is based on “criteria,” which is a popular paradigm for grading, but not compatible with all of them.
  • General Standard 3.4 is hard to grade at all in self-determined learning when the students themselves have to come up with their own assessments (and yes, I did have a class like that when I was a sophomore in college – at a community college, actually). Well, I say hard – you really can’t depending on how far you dive into self-determined learning.
  • General Standard 3.5 seems like it would fit a self-determined heutagogical framework nicely… in theory. In reality, it's hard to get any points here because of the reasons covered in 3.4.

Again, the point being that it is harder for some approaches like heutagogy, ungrading, and connectivism to pass. If I had time and space, I would probably need to go into what all of those concepts really mean. But please keep in mind that these methods are not “design as you go” or “failing to plan.” These are all well-researched concepts that don’t always have content, assessment, activities, and objectives in a traditional sense.

Many of the problems still come back to the combination of a graded rubric being utilized by a large institutional system. A heutagogical course might pass the QM rubric with, say, an 87 – but the institution is going to look at it as a worse course than a traditional instructivist course that scores a 98. And we all know that institutional interest in the form of support, budgets, praise, awards, etc will go towards the courses that they deem as “better” – this is all a predictable outcome from choosing to use a 100 point scale.

There are many other aspects to consider, and this post is getting too long. A couple more rabbit trails to consider:

[embedded tweet 1299002302137802753]

So, yes, I do realize that QM has helped in many places where resources for training and instructional designers are low. But QM is a rubric, and rubrics always fall apart the more you try to make them become a one-size-fits-all solution. Instead of trying to expand the one Higher Ed QM rubric to fit all kinds of design methodologies, I think it might be better to look at what kind of classes it doesn’t work with and move to create a part of the QM process that identifies those courses and moves them on to another system.

A Template, a Course, and an OER for an Emergency Switch to Online

So the last few weeks have been… something. Many of us found ourselves in the rush to get entire institutions online, often with incredibly limited resources to do so. I’ve been in the thick of this as well. Recently I shared some thoughts about institutions going online, as well as an emergency guide to taking a week of a class online quickly. I would like to add some more resources to the list that we have been developing since those posts.

First of all, I would like to repeat what many have said (and what I tried to emphasize in that first post): take care of yourself, your family, and those around you first. Don't expect perfection from yourself. Practice self-care as much as possible (I know that is easier said than done). Then make sure to take care of your students as well. Communicate with them as much as possible, be flexible, remember that many aspects of their lives have been suddenly upended, and above all, make sure to be a voice of care in these times.

I also know that at some point, you will be expected to put your course online and teach something, whether you think it is a good idea or not. So for those that are at that stage, here are some more resources to help.

First of all, I am working with some other educators to put together a free course called Pivoting to Online Teaching: Research and Practitioner Perspectives (I didn’t really like the word “pivot,” but I was overruled). It is a course that you can take for free from edX, but for those that don’t want to register, we have been placing all of the content on an alternative website that requires no sign-up. Lessons are being created in H5P (remixable) and traditional html format. Archives of past events are also being stored here as well. We are halfway through Week 1, so plenty of time to join us.

As part of that course, I created a module template for an emergency switch to online. This is basically a series of pages that work together as a module that you can copy and modify to quickly create course content. It tends to follow many of the concepts we are promoting in the class (Community of Inquiry, ungrading, etc.), but it can also easily be modified to fit other concepts as well. I basically went through my earlier post "An Emergency Guide (of sorts) to Getting This Week's Class Online in About an Hour (or so)" and followed it in making a Geology module. Then I added some notes in red to talk about options and things you should think about if you are new to this. You can find a Canvas / IMS Common Cartridge version in the Canvas Commons that can be imported into Canvas, or downloaded and imported into systems that support IMS. However, since there are also other systems that don't use either of these formats, I also made a Google Docs version as well as a Microsoft Word version for download.

And finally, the OER – our book Creating Online Learning Experiences: A Brief Guide to Online Courses, from Small and Private to Massive and Open is still available through Mavs OpenPress in Pressbooks (with Hypothes.is enabled for comments as well). I want to highlight a few of the chapters:

Of course, I like the whole book, so it was hard to pick just a few chapters, but those are the ones that would probably help those getting online quickly. When you get more caught up, I would also suggest the Basic Philosophies chapter as one to help guide you in thinking through many of the underlying aspects of teaching online.

An Emergency Guide (of sorts) to Getting This Week’s Class Online in About an Hour (or so)

With all of the concern the past few weeks about getting courses online, many people are collecting or creating resources for how to get courses online in case of a last minute emergency switch to online teaching. Some are debating whether to call it "emergency remote teaching" (or some variation of that) instead of actual "online teaching." I agree with the difference, but I don't think that the academic definitions of either one really bring about much change in the practical work of getting online.

There are many problematic issues to address that many are not talking about. Accessibility, student support, and the social support structures that schools provide don't always switch online so well. Some students are even being kicked off of campus, with little mention of where they will go if they don't have a place lined up this early, whether they can afford to get where they need to go, and whether the environment they end up in even allows them to learn online. On top of all of that, few are talking about the difficulty and chaos that going online will create.

Of course, a lot can be said about whether closing schools or going online instead of canceling is a good idea. Those are all good questions to ask. But a lot of people out there have already been forced to go online whether they agree or not, and many more will be forced to do so in the coming weeks. So we have to talk practical steps for those that are in this predicament.

There are many okay-to-good guides out there for switching online. Most of them will tell faculty to examine their syllabus to see what can and can't go online. This is a good first step, but it often ends up being the last step mentioned in the process. There need to be some quick and blunt guides for what it actually means to examine your syllabus. So I am going to dive into that here.

Most Instructional Designers will be able to put a week's worth of a class online in a very short amount of time IF given free rein to apply effective practices focused on the bare minimum needed and a complete set of content based on those principles. Once IDs start getting away from that – adding in time consuming online options that faculty love but that are not absolutely necessary, or waiting for faculty to get them content – the time to create a class increases quickly. However, if you are willing to focus on the bare bones of good online course design, there are many things you can learn from IDs in a pinch.

As I go through this, I will be addressing accessibility issues along the way. The main thing to keep in mind is that media (mainly video, but also images) is easy to make accessible (due to built-in alt tags and captioning features), but also the most time consuming. Auto captioning usually doesn't cut it. You will still need to read through and correct any mistakes by hand. The more you can get away from relying on video or video services, the less time it will take to prepare course work (in general).

The first step in going online is to talk with your students about what that would mean before you are forced to make the switch. Talk with them about what it takes to learn online. Have them go through your syllabus and brainstorm ideas for how to transition your objectives to online. Give them the freedom to suggest changes to objectives, or to even think of different activities to meet objectives. Ask them to talk to you privately if they don't have Internet access at home, or if they need other support services. Make sure they all have a way to check in with you (just so you know they are okay), and a back-up method or two in case the main communication method is not working well (or goes down).

If you have already been forced online, or there will be no class meetings between now and when the switch happens, you will need to think through this yourself. Of course, thinking through this yourself might help you guide the discussion with your students – so do it either way.

Content Creation

The first thing to ask yourself is how new information/content/etc will be communicated to students:

  1. One-Way Communication: Typical lecture method, where you share the new information that students learn. This is the easiest communication to make accessible, but captions could still be time consuming if you are relying on video (especially longer ones). If your goal is one-way communication, you don't need synchronous video tools, even for questions (students can contact you for those – email, comments if you use a blog, etc.). Also, note that this type of communication can be made to work on mobile devices more easily.
  2. Discussion Between Instructor and Students: If you really want the ability to interact, and not just answer questions, you do need to look at tools for interaction. Discussion forums are the most accessible (but a little less human), while video conferencing tools are more problematic in regards to accessibility. For example, people with various hearing issues report that Zoom’s accessibility tools start to fall way short of ideal once four people get on a session. So if you really need this, you might want to consider using small group structures that can use a variety of tools (even a phone). Which would bleed over into another communication modality:
  3. Students Communicating With Each Other: Yes, this would include small group discussion. However, also consider how you can encourage students to use your class as a support network. Don’t just lock-down class tools to only be used for class activities. Help students get that human connection they will start to miss once social distancing sets in. This communication modality can be very effective for mobile devices and accessibility needs if you can be flexible about tools and structures.

Here is the thing that will save you the most time: If it were up to me for a class I was teaching, I would not try to schedule meetings for online lectures or even record videos of my lectures to get those online. It is possible to do that well when going online, but time consuming and problematic in regards to accessibility. Even typing out my lecture for the week can take a while. I would go straight to the Internet:

  • With so much out there, you can probably find articles, blogs, websites, etc that contain the content you need in a 15-30 minute search (or less if you already know of some sources).
  • Then use this link to see how to set up a really fast accessibility check tool in your browser. Use this tool on each source you find.
  • Be careful of video sources – make sure they have accurate captions.
  • Then take another 15-30 minutes to create a content page or blog post that lists each source and adds any core concepts you couldn’t find. This will be the most accessible form of content to make, as well as mobile friendly, as long as the services you use are accessible and mobile friendly.
  • This is a great way to get a wider array of perspectives on course topics than a textbook usually provides. But check to make sure you have diverse perspectives – if your list relies heavily on white Western heterosexual cis males, then you will need to change the parameters of your search to be more inclusive.
  • If needed, you can print articles out on paper for those without Internet access (and just hope there is still a functioning way to get it to them).

Obviously, if you have a textbook that you base your lectures on, then you already have a source of content that you can write up some notes on. This won't necessarily be the most diverse perspective, but it will be quick. Just be sure to think through issues like students that couldn't afford the textbook (ahem – OER?), whether they can access the eBook texts at home, and so on.

Even more advanced quick method: Turn to some student-centered design methodologies to make the course more engaging:

  • Spend the 15-30 minutes creating an activity for your students to go find the content for the week (online, at a library, etc).
  • Towards the end of the week, create a page or blog post collecting those sources with your commentary.
  • Put some time into creating something more than just “go find content!”
  • Think through how to address accessibility issues, as well as how students that don’t have Internet will find content. Be flexible on that last one – not every student can just go to the local library when they want (and what if libraries close?).

Be ready to have to use the mail service for some students if you have to, and don’t worry too much about deadlines if so.

If you really need to use video, you will need to have well-edited captions. This can take a while. There are really three main options:

  • Option 1: Record your video, upload it for auto-captioning, and then edit the captions for errors. Not all video upload services allow all of this, so check ahead of time. This will probably sound the most natural, but you will also probably be surprised at how much time you waste going "ummm…" or "let me start over…."
  • Option 2: Type out your content and read from the page, without worrying about the way it makes you look "stiff." You don't usually write the way you talk, so it will just have to come out that way in a crunch. But you will probably stay focused without too many side tracks. If you keep the video length to about 2-3 minutes, you can probably write and record it in 30 minutes, depending on how fast you write.
  • Option 3: Record your video, upload to YouTube or something else that has auto captioning, download the auto captions, edit for mistakes (not style), and re-record the video reading this script. The fact that it was based on your natural speech will make it sound more natural. Plus you had a built in practice run. A better end result (not perfect), but also more time consuming.

Activity Creation

This part of the course could possibly be the easiest part to create, or the hardest part, depending on your topic. If you have extensive lab requirements that can't be (or just aren't) simulated online, you will need to get together as a department and figure out how to translate that into the online space. Unfortunately, there is no quick way around that.

Also, keep in mind that you don't have to come up with a project or test every week. Sometimes projects take more than one week. Sometimes it is good to just relax and learn. Plus, tests are a problematic concept to begin with, and proctoring solutions will be hard to implement when a lot of people start staying home in the same house. So the more you can move away from big high-stakes tests, the better the online experience will be for you and your learners.

Here is the thing that will save you the most time: One thing that is very cliche but effective is to use your LMS test tool to create a low stakes understanding check:

  • 5-10 questions that cover the core things students should have learned for the week.
  • Give students unlimited attempts so that learners can take them over as needed to get all the questions right.
  • This is not the coolest online design method, but it does give students some relief to know they are on the right track.
  • It should only take 20-30 minutes to create 5-10 questions… if your focus is on making sure students have had some contact with core concepts, and not on trick questions or “gotcha!” fake answers.
  • The goal with this kind of activity is not to catch cheaters, but to help students know what you think is important.

Even more advanced quick method: Really, what I would focus on is creating authentic / experiential / etc. projects that allow learners to engage more deeply with what they are learning:

  • Think of something that would allow learners to apply the course content to their real lives.
  • Think of something that would also let them apply course knowledge to a real world situation.
  • Let students think of how they will communicate what they have learned. Don’t limit them to just what you think they should produce (like a paper).
  • If you are spending more than 15-20 minutes writing out the instructions to this, you are over-thinking it.
  • If you spend less than 5 minutes writing instructions, you are probably not giving students that are possibly new to this level of agency enough guidelines on how to do the project. Remember the students that are new to all of this.
  • Provide 3-4 example ideas of how students could complete the assignment. Don't worry too much if several students use your idea. Think of some outside-the-box ideas, like skits, graphics, etc.
  • Remember flexibility and accessibility. If you have to accept a hand-written project sent through the mail – or maybe even one transcribed by a sibling or a spouse – then no problem. Just be glad they are learning.
  • It's best to grade these in more of a general way. Don't get bogged down in exact point totals for every mistake. Consider ungrading if possible in your institutional structure.
  • For larger classes, have students self-organize into groups based on interests and/or desired artifact to create. Don’t forget that students might need help self-organizing online or at a distance. Again, remember those with accessibility issues, or internet access issues.

At this point, I would have spent about an hour creating my class for the week: 30 minutes researching and writing a blog post containing the week's content (FYI – this blog post is waaaay too long for that, so don't follow my example :) ), and 30 minutes creating a student-centered, authentic, open-ended project. And this project would take students 3-4 weeks to complete. If you are new to creating classes this way, your first time or two at doing this might take longer (especially if you are new to the tools you will use). Also, if you need other things like video or labs, that will take more time than this. But there is something else to plan for as well.

Course Communications

I have been touching on communication issues throughout this post, so I will try not to repeat what I have already stated. You will need to communicate other things outside of content and activities, like class norms and online etiquette. While it may seem like I am against synchronous communication methods here, that is not the case. You can use both. You will just need to consider how to use synchronous tools in ways that address accessibility and internet access issues, as well as make sure the tools work well on mobile devices (which is hard to do for all, since a lot of that is personal preference).

Like I mentioned in a previous post, scheduling synchronous sessions can be tricky at best in shutdown / quarantine situations. Yes, you can do the “we will record it and you can watch later if you miss” thing, but that is also problematic. Some of the reasons that cause people to miss – like overwhelmed internet service – will prevent the same people from watching the recorded video. They will also feel a bit left out.

But here is what I would do for communication in a sudden switch to online:

  • Talk to students beforehand if possible. Create a plan.
  • Use synchronous tools for open communal office hours.
  • Set up alternate options for communication – phone, email, even mail if needed.
  • Send out an email, text, mass phone call, or something weekly. This is a good way to humanize your online learning – stick with those principles as much as possible.
  • You don’t have to avoid video entirely if you like using it. Sometimes a quick 1-2 minute “welcome to the remote version of class” video can help. Just get the captions right first!
  • Students might be new to learning online. If your school has a Code of Conduct for online interactions, make sure to follow that and make your students aware of it. If your school doesn’t have one, or has one that you feel is not adequate enough, consider creating your own Code of Conduct for your class.

Every time I edit this post, I add a bunch of new sentences all over the place. There is a lot more that could be said, but I will stop here. I hope that this post gives faculty the idea that they can focus their classes on what works in online learning, not just re-creating the face to face class. Also, I hope this empowers you to save some time in the process, without sacrificing effective practices in the name of an emergency. If you have access to an Instructional Designer to help, please talk to that person even if you have read this post. They can give you even more specific advice related to your unique course needs than this post can.

Instructure Wars, Private Equity Concerns, and The Anatomy of Monetization of Data

The Annals of the Dark and Dreadful Instructure Wars of 2019

as told by

Matt, the Great FUD Warrior, Breaker of Keyboards, Smacker of Thine Own Head, Asker of Questions That Should Never Be Asked

If you were lucky, you were spared the heartache that came out of nowhere over the announcement that Instructure will possibly be sold to Thoma Bravo, a private equity firm.

Well, it should be stated that the concern from those that always have concerns over these sales announcements was expected. The quick “shush shush – nothing to worry about here” that came in response from people that usually understand the concerns over past Private Equity sales in Ed-Tech (Blackboard being the typical example) was the surprising part.

For the record, the first skirmish actually started when Jon Becker asked what possible outcomes there could be of the sale, Audrey Watters responded with her thoughts on that, and someone made a sexist attack on Audrey’s knowledge of private equity. Also for the record, they did not initially disagree with her point that prices would go up; on the idea that Instructure would be broken up and sold off, though, they claimed this was impossible because there is nothing about Instructure that could be broken into parts. Many of us pointed out the sexist problems with the way he expressed his opinion (not his underlying opinion about PE), but he dug his heels in. We also pointed out that there were actually many things within Instructure that could be broken apart, but that was apparently grounds for fighting (even though it is true there are many parts of Instructure that could be broken off and sold, and just because Thoma Bravo has a history of a buy-and-build strategy, there is nothing stopping them from still selling off parts that don’t fit that strategy. “Buy-and-build strategy” and “selling off parts” are not mutually exclusive). Within that argument, the idea came up that all of the data Instructure has been bragging about for a few years could either be sold or monetized (more on the important difference there later).

Sometime within the next day, Jesse Stommel made the tweet that really kicked off the main war (I don’t know if he was replying to comments about the value of data in the earlier arguments or if it was a coincidence). This is going to be a long post, so I am trying not to embed Tweets here like I usually do. In what was an obvious reference to Instructure bragging that their data was core to their value as a company, Jesse made the comment that we now know this value they were bragging about has a price tag of $2 Billion.

Now, can I just say here – I don’t think in any way that Jesse thought that “Instructure data actually cost $2 Billion.” I’m pretty certain he knows that personnel, assets, code, customer payments, etc. are all part of that value. It’s just that there was a lot of bragging about data being core to the company value, and a huge gap between the market value at the time and $2 Billion, and that his professional analysis was that data contributed to that gap in a big way.

Then there was some debate over the value of data in an Ed-Tech company. This was followed by some shooshing and tone policing towards anyone that thought there should be concern over the lack of transparency about data that Instructure has become known for recently, as well as concerns over what could change with new owners. This led to people retreating to their own corners to express their side without having to be interrupted with constant tangential arguments (and there is nothing wrong with this retreating).

Audrey Watters has written her account of the ordeal, which I recommend reading in its entirety first. I am tempted to quote the whole thing here, so really go read it. I’ll wait.

Okay, first I want to clarify something. In my mind, there is a difference between “selling data” and “monetizing data” even though there are obvious overlaps:

  • “Selling Data” is taking a specific set of data (like from a SQL data dump) and selling that to companies that will turn around and sell it to others (which does happen with educational data – more on that later). When someone says “what good is someone knowing that I submitted Quiz 2 back in 2016?”, they are referring to data as a static archive of database rows from a specific date. It is kind of like looking at data as a crop of apples that were harvested at a specific time. There was concern over this as a possibility, and we will look at that later.
  • “Monetizing Data” is any form of making money directly from creating, manipulating, transporting, etc. data. This happens a lot in everyday life, and not all instances of it are bad. The core business of most for-profit LMS companies is the monetization of data – nothing in an LMS works without data. Grade books need data to work. Discussion forums are empty without data. Analytics dashboards show nothing without data. This is kind of like looking at data as the fruit of a field of apple trees that keeps growing after every picking. You could wipe an LMS database of all past data (well, assuming you could find a way to do so without shutting down functionality), but as soon as you turn it back on, the code starts generating a massive set of new data. For many, the main concern with the monetization of data is who controls the data and what will they use it for in the LMS? Will I get manipulated by my own data being compared to past learners’ data without either of us knowing about it?

Now, to be fair, most responses were fairly nuanced between the two “sides” of the war. For the record, my “side” in the great Instructure War of 2019 is that “data has the potential to be used in ways that users may not want, which could include monetization, and both Instructure and their potential buyer are not saying enough about what their plans are.” I think that is close to what many others thought as well, but our position was mainly reduced to “all data bad!” while the other side was reduced to “data has no value so stop worrying!” Setting both caricatures aside, I want to examine the idea of whether educational data can have value (to be sold or monetized).

Instructure’s View of Their Own Data

First, I think it is prudent to start with Instructure’s own view on their data. While it would be hard to reference the amount of bragging they have been doing about the value of data at conferences and sales calls, we only have to look at their own words on their own website to see how they view data.

First there is Project Dig. They start off by proclaiming that they have become “passionate about leveraging the growing Canvas data to further improve teaching and learning.” That passion “became our priority, and over the years we’ve provided greater access to more data and designed new, easy-to-understand (and act on) course analytics.” How can a priority of the company not be a huge factor in what they are worth? This is all under the banner of “we’ve been focused on delivering technology that makes teaching and learning easier for everyone.” Obviously, as an LMS, that focus is their main revenue maker as well. And now data is the priority for that focus.

FYI – the target launch date for their tools that will “identify and engage at-risk students, improve online instruction, and measure the impact of teaching with technology” is… 2020. Conveniently after the proposed sale date, it seems. Again, how could this priority focus of the company – one that is improving what they offer to customers – not be a huge factor in the current sales price?

But they do recognize that there are some problems with digging into data. What word gets mentioned A LOT in the FAQs about potential issues? (Hint: it is a word combined with “data” that starts with “s” and rhymes with “felling.”)

Some key highlights from the FAQs:

  • “Will your practices be consistent with your data privacy or security policies?” They say they “are not selling or sharing institutions’ data” – but only because they choose not to. It is important to note that the question about selling is there because they feel they can do just that if they want. But they assure us they won’t. Of course, new owners can change that.
  • “Is this really just my data, monetized?” Basically, they say it is not an example of monetizing your data just because… they choose not to, not because they can’t. The implication still remains that the possibility is real and it is there. They then give examples of how they could. Again, new owners are not limited by this choice of the old owners.
  • “What can I say to people at my institution who are asking for an “opt-out” for use of their data?” This is the core problem many have with monetization of data: feel free to do it, just give me the option to opt out at least. They say a lot, but don’t really answer that question (which is very concerning). It is important to note that they say “Institutions who have access to data about individuals are responsible to not misuse, sell, or lose the data.” Then they say they “hold themselves to that same standard.” Nothing says this couldn’t change with new owners. But how do you “sell” data that is worthless? They seem to think selling data is as possible as misusing it or losing it. It certainly would be a lot easier to say “no one is out there buying or selling educational data.”
  • (While they do say a lot about openness and transparency, many customers have expressed frustration at some lack in those areas.)

(An important side note in support of Watters’ earlier point: in addition to the main products of the company that could each be broken off and sold, or things like assets, employees, etc., these data projects and services represent even more parts that could easily be broken off and sold if a PE firm chooses to do so at any point.)

Then I give you – Canvas Data. The doc for this service is really a whole page of ideas for how to monetize Canvas data, along with the existing tools to do it. Which is really the goal of the project: “customers can combine their Canvas Data with data from other trusted institutions.” What it doesn’t quite clarify here is that these institutions include companies that sometimes charge money to create, manipulate, and transport student data inside and outside of Canvas. Many people trust Canvas to vet these companies, but sometimes these arrangements are obscure.

I will give one example from an organization that I think is pretty trustworthy – H5P. H5P does integrate with Canvas for free. However, some activities designed in H5P generate grades (which is student data). If you want to transport that data back into the Canvas grade book, you need a paid account with H5P. This is just one example of how a company can monetize Canvas student data, even down to one small data point – a sketch of what that looks like under the hood follows below.
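
To make that one small data point concrete, here is a rough sketch of what pushing a single grade back into an LMS grade book looks like under the LTI Assignment and Grade Services spec, which is the general mechanism tools in this space rely on. Everything here (URLs, token, IDs, the score itself) is a hypothetical placeholder, and this is not H5P’s actual code – just an illustration of the kind of data that moves, and gets monetized, behind the scenes.

```python
import datetime
import requests

# Hypothetical placeholders - in a real integration these come from the LTI
# launch and an OAuth2 client-credentials grant with the LMS ("platform").
ACCESS_TOKEN = "TOOL_ACCESS_TOKEN"
LINEITEM_URL = "https://canvas.example.edu/api/lti/courses/123/line_items/456"

# One student's result for one activity, headed for the LMS grade book
score = {
    "userId": "lti-user-id-789",
    "scoreGiven": 8,
    "scoreMaximum": 10,
    "activityProgress": "Completed",
    "gradingProgress": "FullyGraded",
    "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
}

resp = requests.post(
    f"{LINEITEM_URL}/scores",
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        # media type defined by the IMS/1EdTech Assignment and Grade Services spec
        "Content-Type": "application/vnd.ims.lis.v1.score+json",
    },
    json=score,
)
resp.raise_for_status()
```

That is all “transporting student data” really means in practice – and each of those little POST requests is exactly the kind of thing a vendor can put behind a paid tier.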

Now, while I see no reason to distrust H5P, I can’t force students to trust an organization they don’t know. What if they didn’t want grades generated on all these websites (because H5P is not the only company to do this)? What if they were not comfortable with a company profiting on moving around their grades? Or what if they were concerned about what the change in LMS ownership meant for all of this?

Anyways, all of this information is up on their website because Instructure believes that their data is very valuable, and that it can be sold. Why would they not point out that data is worthless by itself? Why would they talk about all of this if people weren’t asking these questions, if entities weren’t asking to buy data?

(And Instructure is not the only LMS doing this. Even Blackboard’s Ultra is already ready to do more with data, to monetize it today: “We’re not just handing you data. We’re surfacing data that matters when it matters most— to foster more personalized interactions and drive student success.” In other words, they are not just doing what LMSs have always done – managing (handling) your data to monetize it – they are adding value by surfacing it and doing more with it. They “drive success,” because success sells. If you have ever been to an LMS sales pitch, you know analytics, personalization, success, and all the other terms used on the pages I shared are key sales terms to convince organizations to sign the contract.)

The Marketplace for Student Data

One of the contentions of the “LMS data has little to no value” side is that no one is buying or selling student data, as in dumps of past records. It seems that there are existing marketplaces for student data, to the point that someone wrote a journal article on the whole thing: Transparency and the Marketplace for Student Data. “The study uncovered and documents an overall lack of transparency in the student information commercial marketplace and an absence of law to protect student information.” Sounds like a pretty good justification for concern over any student data out there, whether currently under consideration for sale or not. Why is that?

Taking the list of student data brokers Fordham CLIP was able to identify, Fordham CLIP sought to determine what data about students these brokers offer for sale and how they package student data in the commercial marketplace. There are numerous student lists and selects for sale for purposes wholly unrelated to education or military service. Also, in addition to basic student information like name, birth date, and zip code, data brokers advertise questionable lists of students, and debatable selects within student lists, profiling students on the basis of ethnicity, religion, economic factors, and even gawkiness.

That is not all.

Under the Radar Data Brokers

Get ready for this one: there is no evidence that educational data is even staying specifically within a dedicated student data marketplace. This article on under the radar data brokers compiled a list of “121 data brokers operating in the U.S. It’s a rare, rough glimpse into a bustling economy that operates largely in the shadows, and often with few rules.” Most of the entries on the list don’t get into the specific data they collect, so the fact that “education” appears three times on the list for the few that do is concerning:

  • BLACKBAUD INC.
    A “supplier of software and services specifically designed for nonprofit organizations. Its products focus on fundraising, website management, CRM, analytics, financial management, ticketing, and education administration.” (Wikipedia)
  • MCH INC. DBA MCH STRATEGIC DATA
    MCH “provides the highest quality education, healthcare, government, and church data.”
  • RUF STRATEGIC SOLUTIONS
    A marketing firm owned by consumer identity management company Infutor with a focus on travel, tourism, insurance, e-commerce, and education.

These are places already in business, already buying and selling data. If you look at the chart of the attributes (types of data) that Acxiom collects, “education” is one. It’s not hard to believe that – if they don’t already have it – they would be very interested to add “I completed quiz 2 on such-and-such date” to that massive collection on each person.

So Why $2 Billion for Instructure?

The only concrete answer we have now is “nobody outside those privy to the details knows.” There are many speculations out there – some of which started the Instructure Wars. One of the main ones I haven’t touched on, that probably summarizes one side of the Instructure Wars, is that the data adds little to nothing to the value of Instructure (despite their own claims to the contrary), but it is “simple math” getting from the current market value to $2 Billion.

I think there is a point to be made in the “simple math” argument (although I would be careful calling it “simple” or claiming “people just don’t understand” if they don’t agree). I would say that even basic math has to account for the value of data (both the price it could be sold for and the value it can add through monetization). Autumm Caines made the comparison that data is the engine to the LMS car, and you don’t really buy one without the other. In fact, those that are claiming that the data has no value are accusing Instructure of being the shadiest used car salespeople in the world: “If you will buy this new fancy car, I will throw in the engine for free!”

However, it seems that different Instructure investors are now disagreeing with the $2 Billion price tag, some thinking it is too low. In fact, they think it “significantly” undervalues the company. I would assume these investors have access to details about the price of the sale, and if the $2 Billion was simple math, I don’t know if there would be much room to disagree. The cost of the code would be tied to the revenue it generates, and therefore would be static and easy to calculate. Various aspects like assets, personnel, and the value of the income from investors are all relatively fixed. Even the future revenue is based on various predictive factors that would be hard to argue.

Seeing that the data is the newest priority of the company, and its value is difficult to calculate, might that be the best candidate for the source of this disagreement? Maybe, but only if it is really, really complex data that leads to complex calculations that could easily be off. And LMS data is pretty straightforward… right?

Well, not so much. Kin Lane took a look at what the public APIs of Canvas reveal about the underlying data, and it’s a doozy. That is another article I could quote in its entirety, so please take time to read it. I know the page looks long, but that is because he lists 1666 data points (!!!) in just the public APIs alone (while pointing out there are many private ones that probably have many more). He also points out how this structure and the value that it brings easily accounts for the $2 Billion price tag and more, especially when combined with the costs of code and people and customers and so on.
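
If you want a small taste of what Kin Lane is describing without reading all 1666 entries, you can point a short script at a Canvas instance you have an account on and just count the fields that come back from a single endpoint. A minimal sketch, assuming you have generated a personal API token; /api/v1/courses is part of the documented Canvas REST API, though the exact fields you see will vary by instance and permissions.

```python
import requests

# Hypothetical placeholders - use your own Canvas domain and personal API token
CANVAS_URL = "https://canvas.example.edu"
API_TOKEN = "YOUR_API_TOKEN"

resp = requests.get(
    f"{CANVAS_URL}/api/v1/courses",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    params={"per_page": 10},
)
resp.raise_for_status()

# Every key on every course object is one more data point being tracked -
# and this is one endpoint out of the hundreds Kin Lane catalogued.
fields = sorted({key for course in resp.json() for key in course})
print(f"{len(fields)} distinct fields on the course object alone:")
for field in fields:
    print(" -", field)
```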

Now, of course, I am willing to bet that there are multiple factors that are causing the investors to fight over price. It could be that they think Canvas’ movement into the corporate training space is about to take off. It is probably a combination of many factors, some I have not even touched on here.

But it is just not far-fetched to think that there is a real possibility that the data is driving the price, either as a commodity to be sold and/or as a service to be monetized.

As I finished this post, Jim Luke published a blog post on the economics side of the issue. While it does expose that many of us (especially me) are using the wrong terms, the basic idea remains that the PE firm will not have education’s best interests at heart and that the data is driving the economics here. He touches on some of the same points about data that I make here, goes into a lot more depth, and shares several plausible scenarios for the future goals of Thoma Bravo, including one that makes the monetization of data very central to the future sales value of the company. A detailed but necessary read as well. I don’t have time or energy to go back and correct what I got wrong based on his post, so feel free to blast me in the comments if you so desire.

The Ferocity of the Battle

Right now, my direct messages on Twitter are booming with multiple people that are all flabbergasted as to why this is so controversial. We get that there would be disagreement, but the level of ferocity that one side has had in this battle is surprising. Especially since we all thought these were people that agreed that data does bring significant monetary value to a company.

“Data is the new oil!” we were told. But now we are told it was only… black paint all along? Something that will be there in every painting, but doesn’t cost much to be there?

Many people have offered thoughts on why some are so determined to fight this fight. If they get shared publicly I will probably come back and add them here. For my part, I just don’t know. People that want to protect students from misuse of data I get, but they have really gone to extra levels of fight over this (beyond what they usually do, that is). The real surprise is the sheer irritation from the Learning Analytics community. I know – how did that happen? None of this Instructure kerfuffle says anything bad or good about Learning Analytics, yet they are in the thick of the battle at times.

Still, why is the basic message of “we should be vigilant to make sure that a company that has been a bit opaque with data issues recently gets sold to another company that may or may not be more open, because their data has the possibility to be exploited” so controversial right now? Why must so many people be proven right on the exact price of data? I don’t know.

For me, there could be a news report tomorrow that has Instructure stating “yep, the price was all about the data,” and I would just respond with “okay, thought so.” I get the feeling I am going to be buried in a barrage of snide Tweets if the opposite narrative goes in the news.

Which, let’s be honest, will be the narrative from Canvas. They have to say there isn’t much value to the data no matter what the truth is. If they let on that it has actual, real value, every single school, teacher, and student will immediately want to sue for their share. Even if there is no lawsuit, the public relations nightmare would cause untold damage as people get mad that their data had direct value in the massive sale.

Of course, the reality is that it does not matter what comes out. Canvas already bragged about the value of the data they are monetizing. They already are using it in ways that people don’t want. People have a good reason to be upset about the monetization of their data because it is already happening.

Thus ends the accounting of the never-ending Instructure Wars, as best can be summarized near the end of the dread year 2019.

As the wars drag on and alliances are strained, many began to wonder….

Will this ever end….

So You Want to Go Online: OPMs vs In-House Development

As the Great OPM Controversy continues to rage, a lot is being said about developing online courses “in-house” (by hiring people to do the work rather than paying a company to do so). This is actually an area that I have a lot of experience in at various levels, so I wanted to address the pros and cons of developing in-house capacity for offering online programs. I have been out of the direct instructional design business for a few years, so I will be a bit rusty here and there. Please feel free to comment if I miss anything or leave out something important. However, I still want to take a rough stab at a ballpark list of what needs consideration. First, I want to start with three given points:

  1. Everything I say here is assuming high-quality online courses, not just PowerPoints and Lecture Capture plopped online. But on the other hand, this is also assuming there won’t be any extra expenses like learning games or chat-bots or other expensive toys… errr… tools.
  2. In most OPM models, universities and colleges still have to supply the teachers, so that cost won’t be dealt with here, either. But make sure you are accounting for teacher pay (hopefully full time teachers more than adjuncts, and not just adding extra courses to faculty with already over-full loads).
  3. All of these issues I discuss are within the mindset of “scaling” the programs eventually to some degree or another, but I will get to the problems with scale later.

So the first thing to address is infrastructure, and I know there are a wide range of capacities here. Most universities and colleges have IT staff and support staff for things like email and campus computers. If you have that, you can hopefully build off of that. If you don’t…. well, the OPM model might be the better route for you as you are so far behind that you have to catch up with society, not just online learning. But I know most places are not in this boat. Some even already have technology and support in place for online courses – so you can just skip this part and talk directly with those people about their ability to support another program.

You also have to think about the support of technology, usually the LMS and possibly other software. If you have this in place, check to make sure the existing tools have capacity to take on more (they usually have some). If you have an IT department – talk with them about what it would take to add an LMS and any other tools (like data analysis tools) you would like to add. If you are talking one online program, you probably don’t need even one full time position to support what you need initially. That means you can make this a win/win for IT by helping them get that extra position for the ____ they have been wanting for a while if they can also share that position with online learning technology support part-time.

This is, of course, for a self-hosted LMS. All of the LMS providers out there will offer to host for you, and even provide support. It does cost, but shop around and realize there are vendors that will give you good service for a good price. But there are also some that won’t deal with you at all if you are not bringing a large number of courses online initially, so be careful there.

Then there is support for students and teachers. Again, this is something you can bundle from most LMS providers, or contract individually from various companies. If you already have student and faculty tech support of some kind on campus, talk with them to see what it would take to support __ number of new students in __ number of new online courses. They will have to increase staff, but since they often train and employ student workers to answer the calls/emails, this is also a win/win for your campus to get more money to more students. Assuming your campus fairly treats and pays its student workers, of course. If not, make sure to fix that ASAP. But keep in mind that this can be done for the cost of hiring a few more workers to handle increased capacity and then paying to train everyone in support to take online learning calls.

Then there will be the cost of the technology itself. Typically, this is the LMS cost plus other tools and plug-ins you might want to add in (data analytics, plagiarism detection, etc). Personally, I would say to avoid most of those bells and whistles at the beginning. Some of them – like plagiarism detection – are surveillance minded and send the wrong message to learners. Hire some quality instructional designers (I’ll get to that in a minute) and you won’t even need to use these tools. Others like data analytics might be of use down the line, but you might also find some of the things they do underwhelming for the price. With the LMS itself, note that there are other options like Domain of One’s Own that can replace the LMS with a wider range of options for different teachers and students (and they work with single sign on as well). There are also free open-source LMS if you want to self host. Then there are less expensive and more expensive LMS providers. Some that will allow you to have a small contract for a small program with the option to scale, others that want a huge commitment up front. Look around and remember: if it sounds like you are being asked to pay too much, you probably are.

So a lot of what I have discussed is going to vary in cost dramatically, depending on your needs and current capacity. However, if you remain focused on just what you need, and maybe sharing part of certain salaries with other departments to get part of those people’s time, and are also smart about scaling (more on that later), you are still looking at a cost that is in the tens of thousands range for what I have touched on so far. If you hit the $100k point, you are either a) over-paying for something, b) way behind the curve on some aspect, or c) deciding to go for some bells and whistles (which is fine if you need them or have people at your institution that want them – they usually cost extra with OPMs as well).
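
To put some very rough structure on that claim, here is a back-of-the-napkin sketch of the kind of math I am describing. Every figure in it is a hypothetical placeholder – swap in your own vendor quotes and salary shares – but if your honest version of this list totals well past $100k, that is your cue to ask which of the three problems above you have.

```python
# Back-of-the-napkin infrastructure estimate for one new online program.
# All figures are hypothetical placeholders - plug in your own numbers.
annual_costs = {
    "hosted LMS contract (small-program tier)": 25_000,
    "shared IT/support position (half of a salary)": 30_000,
    "help desk expansion (student workers + training)": 12_000,
    "other tools you actually need (no bells and whistles)": 8_000,
}

total = sum(annual_costs.values())
for item, amount in annual_costs.items():
    print(f"{item:<55} ${amount:>8,}")
print(f"{'TOTAL':<55} ${total:>8,}")

if total > 100_000:
    print("Over $100k: over-paying, playing catch-up, or buying bells and whistles?")
```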

The next cost that almost anyone that wants to go online will need to pay for no matter what you do is course development. Many people think they can just get the instructors to do this – but just remember that the course will only be as good as their ability/experience in delivering online courses. You may find a few instructors that are great at it, but most at your school probably won’t fall into that category. I don’t say that as a bad thing in this context per se – most instructors don’t get trained in online course design, and even if they do, it is often specific to their field and not the general instructional design field. You will need people to make the course, which is where OPMs usually come in – but also in-house instructional designers as well.

With an average of 6-8 months of lead time and a productive instructor, a quality instructional designer can complete 2-3 quality 15-week online courses per semester. I know this for a fact, because as an instructional designer I typically completed 9 or so courses per year. And some IDs would consider that “slow.” More intense courses that are less ready to transition to online could take longer. But you can also break out of the 15-week course mindset when going online as well – just food for thought. If you are starting up a 10-course online program, you would probably want three instructional designers, with varying specialties. Why three IDs if just one could handle all ten courses in two years easily? Because there is a lot more to consider.

Once you start one online program, other programs will most likely follow suit fairly quickly. It almost always happens that way. So go ahead and get a couple more programs in the pipeline to get going once the IDs are ready. But you also need to build up and maintain infrastructure once you get those classes going. How do you fix design problems in the course? When do you revise general non-emergency issues? What about when you change instructors? And who trains all of these instructors on their specific course design? What about random one-off courses that want to go online outside of a program? Who handles course quality and accreditation? And so on. Quality, experienced instructional designers can handle all of these and more, even while designing courses. Especially if you get one that is a learning engineer or that at least specializes in learning engineering, because these infrastructural questions are part of their specialty.

The salary and benefits range for an instructional designer is between $50K and $100K a year depending on experience and the cost of living where you are located. These are also positions that can work remotely if you are open to that – but you will want at least one on campus so they can talk to your students for feedback on the courses they are designing. But remote work is something to keep in mind, because you also have to consider the cost of finding office space and getting computers and equipment for each new person you want to hire (either as IDs or the other positions described). Also don’t forget about the cost of benefits like health care, which is pretty standard for full-time IDs.
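
Pulling the ID throughput and salary numbers above together, the staffing math looks roughly like this. A minimal sketch using the ranges already discussed; the program size and timeline are hypothetical and should be swapped for your own.

```python
# Rough instructional designer (ID) staffing math, using the ranges above.
courses_in_program = 10            # hypothetical 10-course program
courses_per_id_per_year = 9        # roughly 2-3 quality courses per semester
target_years_to_build = 1.5        # hypothetical build-out timeline

ids_for_throughput = courses_in_program / (courses_per_id_per_year * target_years_to_build)
print(f"IDs needed just to build the courses: {ids_for_throughput:.1f}")

# You staff above that bare minimum to cover revisions, instructor training,
# one-off courses, accreditation support, and the next program in the
# pipeline - hence the recommendation of about three IDs, not one.
planned_ids = 3
salary_low, salary_high = 50_000, 100_000   # salary + benefits per ID
print(f"Annual ID payroll for {planned_ids} designers: "
      f"${planned_ids * salary_low:,} - ${planned_ids * salary_high:,}")
```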

Another aspect to keep in mind is accreditation – that will take time and people, but that will be the case even if you go with an OPM as well. You will need to pull in people from across the campus that have experience with this, of course – but you will also have to find people that can handle this aspect regardless of what model you choose. And it can be a doozy, just FYI.

Another aspect to consider is advertising. This is a factor that will always cost, unless you are focused solely on transitioning an existing on campus program into an online one (and not planning on adding the online option to the on-campus one). But even then, if you want it to scale – you will need to advertise. Universities aren’t always the best at this. If yours is, then skip ahead. If not, you will need to find someone that can advertise your new program. Typically, this is where OPMs tend to shine. But it is also getting harder and harder to find those that will just let you pay for advertising separate from the entire OPM package.

I can’t really say what you need to spend here – but I will say to be realistic. Cap your initial courses at manageable amounts – not just for your instructors, but also for your support staff. I can’t emphasize enough that it is better to start off small and then scale up rather than open the floodgates from the beginning. Every course that I have seen that opens up the first offerings to massive numbers of students from the beginning has also experienced massive learner trauma. Don’t let companies or colleges gloss over those as “bumps in the road.” Those were actual people that were actually hurt by being that bump that got rolled over. Hurt that could have been avoided if you started small and scaled up at a manageable pace.

So while we are here, let’s talk scale. Scale is messy, no matter how you do it. Even going from one on-campus course to two on-campus courses has traditionally led to problems. All colleges have wanted to increase enrollments as much as possible since the beginning of academia, so it’s not like OPMs were the first to talk about or try scale. However, we need to be real with ourselves about scale and the issues it can cause.

First of all, not all programs can scale. Nursing programs scale immensely because the demand for nurses is still massive. Also, nurses work their tails off, so Nursing instructors often personally take care of many problems of scale that some business models cause. I’m still not sure if the OPMs involved in those programs have even realized that is true yet. But not all programs can scale like a Nursing program can. Not all fields have the demand like Nursing does. Not all fields have the people with the mindset like Nurses have (no offense hopefully, but many of you know its true and its okay – I’m not sure if Nurses ever sleep).

All that to say – if you are not in Nursing, don’t expect to scale like Nursing can. It’s okay. Just be realistic about it. Also, be honest about any problems that are happening. Glossing over problems will only cause more problems in no time. Always have your foot on the brake, ready to stop the scaling before issues spiral out of hand.

Remember: education is a human endeavor, and people don’t react well to being herded like cattle. I feel like I have only touched the surface and left out so much, but I am as tired of typing as you probably are of reading. Hopefully this is giving some food for thought for the people that have been wondering about in-house program development.

So why go with in-house development rather than an OPM? Well, I have been making the case for the cost-saving benefits plus the capacity-building benefits as well. Recently I read about an OPM that wanted to charge $600,000 to build one 10-course program. All that I have outlined here, plus the stuff I left out, would easily come in at half of that for a high-quality program. And I am one of those people that usually advocates for how expensive online courses can be to do right. But even I am thinking “Whoa!” at $600K.

Look, if you are wanting to build a program in a field like Nursing that can realistically scale, and you want to deal with thousands of students being pushed through a program (along with all the massive problems that will bring), then you are probably one of five schools in the nation that fit that description and OPMs are probably the best bet for you. For the other 3000-4000+ institutions in the nation, here are some other factors to consider:

  • Hiring people usually means some or all of those people will live in your community, thus supporting local economies better.
  • Local people means people on your campus that can interact with your students and get their input and perspective.
  • Having your people do things also typically means more opportunities to hire students as GTAs, GRAs, assistants, etc – giving them real world skills and money for college.
  • When your academics and your GRAs are part of something, they usually research it and publish on it. The impact on the global knowledge arena could be massive, especially if you publish with OER models.
  • Despite what some say, HigherEd is constantly evolving. Not as fast as some would like, but it is happening. When the next shift happens, you will have the people on staff already to pivot to that change. If not, that will be another expensive contract to sign with the next OPM.

The last point I can’t emphasize enough. When the MOOC craze took off, my current campus turned to some of its experienced IDs – myself and my colleague Justin – to put a large number of MOOCs online. Now that analytics and AI are becoming more of a thing in education (again), they are turning to us and other IDs and people with Ed-Tech backgrounds on campus as well. For people that went the OPM route, these would all be more (usually expensive) contracts to buy. For our campus, it means turning to the people they are already paying. I don’t know what else to say to you if that doesn’t speak for itself.

Also, keep in mind that those who are not in academia don’t always understand the unique things that happen there. Recently I saw a group of people on Twitter upset about a college senior that couldn’t graduate because the one course they needed wasn’t offered that semester. The responses to this scenario are those that many in academia are used to hearing: “bet there is a simple software fix for this!” “what a money making scam!” “if only they cared to treat the student like a customer, they wouldn’t make this happen!” The implication is that the problem was on the University’s side for not caring about course scheduling enough to make graduation possible. Most people in academia are rolling their eyes at this – it is literally impossible for schools to get programs accredited if they don’t prove that they have created a pathway for learners to graduate on time. It makes good business sense that not all courses can be offered every semester, just like many businesses do not sell all products year round (especially restaurants). Plus, most teachers will tell you it is better to have 10 students in a course once a year than 2-3 students every semester – more interaction, more energy, etc. But schools literally have to map out a pathway for these variable offerings to work in order to just get the okay for the courses in the first place. Those of us in academia know this, but it seems that, based on what I saw on Twitter recently, many in the OPM space do not know this. We also know that there is always that handful of students that ignore the course offering schedules posted online, the advice of their advisers, and the warnings of their instructors because they think they can get the world to bend to their desires. I remember in the 90s telling two classmates they wouldn’t graduate on time if they weren’t in a certain class with me. They scoffed, but it turns out they in fact did not graduate on time. So something to keep in mind – outside perspectives and criticism can be helpful, but they can also completely misunderstand where the problems actually lie.

And look, I get it – there will always be institutions that prefer to get a “program in a box” for one fee no matter how large it is. If that is you, then more power to you. There are a few things I would ask if you go the OPM route: first of all, please find a way to be honest and open about the pros and cons of working with your OPM. They may not like it, but a lot of the backlash that OPMs are currently facing comes from people just not buying the “everything is awesome” line so many are pushing. The education world needs to know your successes as well as your failures. Failure is not a bad thing if you grow from it. Second, please keep in mind that while the “in-house” option looks expensive and complicated, going the OPM route will also be expensive and complicated. They can’t choose your options for you, so all the meetings I discuss here will also happen within an OPM model, just with different people at the table. So don’t get an inflated ego thinking you are saving time or money going that route. Building a company is much different from building a degree program, so don’t buy into the logic that they are saving you start-up funds. They had to pay for a lot of things as a for-profit company that HigherEd institutions never have to pay for.

Finally, though, I will point out that you can also still sign contracts with various vendors for various parts of your process while developing in-house, like many institutions have for decades. This is not always an all-or-nothing, either/or situation (see the response from Matthew Rascoff here for a good perspective on that, as well as Jonathan D. Becker’s response at the same link as a good case for in-house development). There are many companies in the OPM space that offer quality a la carte type services for a good price, like iDesign and Instructional Connections. Like I have said on Twitter, I would call those OPS (Online Program Support) more than OPM. It’s just that this term probably won’t catch on. I have also heard the term OPE for Online Program Enablers, which probably works better.

Social Presence, Immediacy, Virtual Reality, and Artificial Intelligence

While doing some research for my current work on AI and Chatbots, I was struck by how much some people are trying to use bots to fool people into thinking they are really humans. This seems to be a problematic road to go down, as we know that people are not necessarily against interacting with non-human agents (like those of us that prefer to get basic information like bank account balances over the phone from a machine rather than bother a human). At the core, I think these efforts are really aimed at humanizing those tools, which is not a bad aim. I just don’t think we should ever get away from openness about who or what we are having learners interact with.

I was reminded about Second Life (remember that?) and how we used to question how some people would build traditional structures like rooms and stairs in spaces where your avatars could fly. At the time it was the “cool, hip” way to mock the people that you didn’t think “understood” Second Life. However, I am wondering if maybe there was something to this approach that we missed?

Concepts like social presence and immediacy have fallen out of the limelight in education, but they still have immense value (and many people still promote them thankfully). We need something in our educational efforts, whether in classrooms or at a distance online, that connects us to other learners in ways that we can feel, sense, connect with, etc. What if one way of doing that is by creating human-based structures in our virtual/digital interactions?

I’m not saying to ditch anything experimental and just recreate traditional classroom simulations in virtual reality, or re-enact standard educational interactions with chat bots. But what if incorporating some of those elements could help bring about more of a human element?

To be honest, I am not sure where the right “balance” of these two concepts would be. If I enter a virtual reality space that is just like a building in real life, I will probably miss out on the affordances of exploration that virtual reality could bring to the table. But if I walk into some wild, trippy learning space that looks like a foreign planet to me, I will have to spend more time figuring out the way things work than actually learning about the topic I am interested in. I would also feel a bit out of contact with humanity if there is little to tie me back to what I am used to in real life.

The same could be said about the interactions we are designing for AI and chatbots. On one hand, we don’t need to mimic the status quo in the physical world just because it is what we have always done. But we also don’t need to do things that are way out there just because we can, either. Somewhere there is probably a happy medium of humanizing these technologies enough for us to connect with them (without trying to trick people into thinking they are humans) while still not replicating everything we already know just because that is what we know. I know some Social Presence Theory people would balk at the idea of those ideas being applied to technology, but I am thinking more of how we can use those concepts to inform our designs – just in a more meta fashion. Something to mull over for now.

Modernizing Websites: html5, Indieweb, and More?

On and off for the past few weeks I have been looking into modernizing some of my websites with things like html5 and indieweb. The main goal of this experimentation was to improve the LINK Research Lab web presence by getting some WebMention magic going on our website. The bonus is that I experiment with some of these on my own website before moving them onto a real website for the LINK Lab. I had to make sure they didn’t blow things up, after all.

However, the problem with that is my website was running on a cobbled-together WordPress theme that was barely holding together, and looking dated. I was looking for a nice theme to switch over to quickly, but not having much success. Then I remembered that Alan Levine had a sweet looking html5 theme (wp-dimension). One weekend I gave it a whirl, and I think we have a winner.

The great thing about Cog Dog’s theme is that it has a simple initial interface for those that want to take a quick look at my work, but it also allows people to dig deeper into any topic they want to. I had to download and delete all of the blog posts that were already on my website, as the theme turns blog posts into the quick look posts on the front page. Those old posts were just feedwordpress copies of every post I wrote on this blog – so no need to worry about that. Overall, a great theme that is easy to use, and one I highly recommend for anyone wanting to create a professional website fast.

Much of my current desire to update websites came from reading Stephen Downes’ post on OwnYourGram – a service that lets you export your Instagram files to your own website. To be honest, the IndieWeb part on the OwnYourGram website was just not working for me, until I found the actual IndieWeb plugins for WordPress. When in doubt, look for the plugins that already exist. I added those, and it all finally worked great. I found that the posts it imported didn’t work that well with many WordPress themes (Instagram posts don’t have titles, but many WordPress themes ignore posts without titles – or render them strangely on the front page). So I still need to tinker with that.

The main part I became the most interested in was how IndieWeb features like WebMentions can help you connect with conversations about your content on other websites (and also social media). That will probably be the most interesting feature that I want to start using on this website and the LINK Lab website as well. So now that I have it figured out, time to get it set up before it all changes :) I’m just digging into this after being a fan from a far for a while, so let’s see what else is out there.