Building a Self-Mapped Learning Pathways Micro-Lesson: H5P vs Twine

One of the issues that I often bemoan in relation to creating Self-Mapped Learning Pathways lessons is that there really isn’t a simple technology that will let you quickly build non-linear, interactive, open-ended content. I have been keeping my eye on H5P, and building a few things with Twine and SAP Chatbots, so I decided to take them all out for a spin to try building something that allows learners to build their own learning pathway.

So how did it turn out? In general, there were some interesting affordances of the tools, but they still don’t get me to where I would like to be with the lesson design. And none of them really did much for the open-ended part. But I did create some OERs that you can use if you like (details at the end). First, some of the process.

SAP Chatbots have some pretty robust tools for creating interactive chats. In theory, I think I could have built everything within a bot, but didn’t get around to it this time because it would have taken some deep dives. I’m also not convinced that a chatbot interface is the way to go, but more about that later. I decided to use a chatbot at a specific point in the learning pathways lesson to help learners think through modality options.

With H5P, I mainly used the Branching Scenario and Course Presentation tools. With H5P, you get a more intuitive interface that looks nice (and we are told is completely accessible), but very few options for customizing anything. I couldn’t change the look, program variables, or embed things like the SAP Chatbot anywhere into the lesson. So I came up with a way to get around that. It seems to be a good basic option for those who don’t want to get into the weeds of programming variables, but it is still mainly a way to create a Choose Your Own Adventure book. Which is what some call “personalized” these days, even though it’s really not.

With Twine, there were many options to customize, add variables, manipulate code, and embed what you want. I am not sure how accessible everything in Twine is, but it does give you a lot more flexibility for customization. Also, the option to set variables means you can let learners choose some options that would reformat what they see based on their selections. I did a little bit of that, but I need to dig into this some more. Since I could embed more things in Twine, I was able to build the entire lesson from beginning to end in Twine (with a chatbot embedded near the beginning, and an H5P assessment embedded at the end of one modality).

So I ended up with two different versions of the same lesson that will allow you to compare the two options. Before I share those, a few thoughts on building the lesson.

It took a long time to think through the options and build the simple choices that I did below. A lot of this could be attributed to the fact that I was building an entire lesson from scratch. I decided to dig into Goals, Objectives, and Competencies because so many of my students struggle with these concepts. Someone who already has a complete lesson built would probably save a lot of time on that front.

Also, I will say that I ran out of time to re-record the videos. There are some mistakes and poorly chosen words here and there (like me saying “behaviorist” when I mean “behavioral”). Maybe I will fix that in the future.

Ultimately, it took a lot of time to build the options and think through how to navigate them, while also trying to find ways to get people who choose to take their own path to the tools they need. This is the open-ended part I still struggle with. It really comes down to this: learners will step out on their own into the garden, or not. I can’t do much to pre-program those options into a system. I could be there in person to discuss their pathways if they needed it, but that is hard to pre-design for. You would spend hours creating the ability for each option, and then maybe have one or two people choose it.

I should point out that this lesson uses a modification of the course metaphor idea that asks learners to choose between the “sidewalk” or the “garden” (or to mix both if they like). The metaphor is based on the botanical garden concept, where sidewalks guide those on pre-determined paths to show the highlights of the garden, while the gardens themselves can be explored as you like by leaving the sidewalk. The sidewalk represents the instructor-centered pathway, while the garden represents the student-centered, heutagogical option.

What I don’t like is the modular way all of these parts feel. I wish there was a way to combine all of the elements so that learners only see one page that re-loads new content based on their input. In other words, instead of a chatbot that tries to mimic human conversation (which some like, but others don’t), why not have a conversational interface that would ask questions and then supply new content, videos, activities, etc. based on the learner input?

Plus, chatbots tend to be cloud-based, meaning everything you put in them is stored on someone else’s computer. Why can’t that be a local tool that protects your privacy better?

Anyways, these lessons are some basic ideas of what a self-mapped learning pathways micro-lesson could look like. I still feel there is more that could be done with the garden pathway in using the coding/variables option in Twine. I also utilized some tools like Hypothes.is and Wakelet in the garden modality (just because I like them), but I need to ponder more about how those tools can be utilized as a mapping space themselves.

So here is what I have:

Goals, Lesson, and Competencies Self-Mapped Learning Pathways micro-lesson in Twine

or

Goals, Lesson, and Competencies Self-Mapped Learning Pathways micro-lesson in H5P

The H5P tool does use plain html pages for the first three pages – you will see when the switch happens. Also, the Twine tool still uses some H5P activities for the sidewalk modality assessments at the end of that modality. Since this is a stand-alone lesson, I needed some kind of assessment option and decided to re-use what I had created already.

A few design notes: The Sidewalk modality is designed so that there is always a main option to choose from for those who need the most guidance, but also links to other options for those who want to skip around. My goal is to always encourage non-linear thinking and learner choice in small or large ways whenever possible. In the Twine version of the lesson, if you choose the Sidewalk option, that is what you see. If you go to the Sidewalk + Garden option, then there is code that inserts links back to the Garden section into the Sidewalk. This is some of the customization I would like to explore more in the future. Also, the Garden and Sidewalk + Garden options have some examples and ideas for learners to choose from (basically, custom links to Twitter, Wikipedia, etc. to show specific evolving searches there). This obviously isn’t much, but it is a self-determined option and therefore I didn’t want to offer too much. But maybe it’s not enough?
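The conditional link insertion described above boils down to a small piece of state-driven logic. Here is a hedged sketch of that idea in Python (in Twine itself this would be done with story variables and conditional macros; the function and link text here are hypothetical stand-ins):

```python
# Sketch of the Sidewalk + Garden logic: a modality variable set by the
# learner's first choice controls whether Garden links are inserted into
# each Sidewalk passage. Names and link text are illustrative only.

GARDEN_LINK = "\n\nWant to wander? [[Explore this topic in the Garden]]"

def render_sidewalk_passage(passage_text: str, modality: str) -> str:
    """Return the passage unchanged for pure Sidewalk learners, or with a
    Garden link appended for learners who chose Sidewalk + Garden."""
    if modality == "sidewalk+garden":
        return passage_text + GARDEN_LINK
    return passage_text
```

The point is that one stored choice reshapes every later passage, which is the kind of variable-driven customization Twine allows and H5P currently does not.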

But, this is a full micro-lesson, and I am designating it as an OER with a CC Attribution-NonCommercial-ShareAlike 4.0 International license for those that want to use it:

  • The videos are on YouTube if you just want to use those.
  • I have created a zip file with all of the html files that you can download and edit. The Twine file in that zip archive (“goals-objectives-competencies.html”) can be loaded into Twine itself and edited as you need.
  • You can also download and update the two H5P files by going either to the full lesson or the assessment portion and clicking on the “Reuse” link in the bottom left corner.
  • The chatbot itself can even be forked and customized by creating an account with SAP and using the fork function on the main page for the bot.

I may even create a badge for those that complete the lesson – who knows? If you want to send a few people through the lesson, feel free to do so with the links above. If you want to send a lot of people through it, maybe consider hosting it on your server. :)

What if We Could Connect Interactive Content Like H5P to Artificial Intelligence?

You might have noticed some chatter recently about H5P, which can create interactive content (videos, questions, games, content, etc) that works in a browser through html5. The concept seems to be fairly similar to the E-Learning Framework (ELF) from APUS and other projects started a few years ago based on html5 and/or jQuery – but those seem to mostly be gone or kept a secret. The fact that H5P is easily shareable and open is a good start.

Some of our newer work on self-mapped learning pathways is starting to focus on how to build tools that can help learners map their own learning pathway through multiple options. Something like H5P will be a great tool for that. I am hoping that the future of H5P will include ways to harness AI to mix and match content beyond what most groups currently do with html5.

To explain this, let me take a step back and look at where our work with AI and Chatbots currently sits, and point to where this could all go. Our goal right now is to build branching tree interfaces and AI-driven chatbots to help students get answers to FAQs about various courses. This is not incredibly ground-breaking at this point, but we hope to take this in some interesting directions.

So, the basic idea with our current chatbots is that you create answers first and then come up with a set of questions that serve as different ways to get to that answer. The AI uses Natural Language Processing and other algorithms to take what is entered into a chatbot interface and match the entered text with a set of questions:

Diagram 1: Basic AI structure of connecting various question options to one answer. I guess the resemblance to a snowflake shows I am starting to get into the Christmas spirit?

You put a lot of answers together into a chatbot, and the oversimplified way of explaining it is that the bot tries to match each question from the chatbot interface with the most likely answer/response:

Diagram 2: Basic chatbot structure of determining which question the text entered into the bot interface most closely matches, and then sending that response back to the interface.

This is our current work – putting together a chatbot fueled FAQ for the upcoming Learning Analytics MOOCs.
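The matching step described above can be sketched in a few lines. The real bots use NLP similarity measures; here plain token overlap stands in for that, just to show the shape of the idea (the FAQ entries are made up):

```python
from collections import Counter

# Toy question-to-answer matcher: find the stored question whose tokens
# overlap most with the learner's entered text, then return its answer.
# A real chatbot would use NLP similarity instead of raw token overlap.

FAQ = {
    "When does the course start?": "The MOOC opens on the posted start date.",
    "How do I earn a certificate?": "Complete all graded activities.",
}

def best_answer(user_text: str) -> str:
    """Match entered text to the closest stored question; return its answer."""
    user_tokens = Counter(user_text.lower().split())

    def overlap(question: str) -> int:
        # Count how many tokens the entered text shares with this question.
        return sum((user_tokens & Counter(question.lower().split())).values())

    return FAQ[max(FAQ, key=overlap)]
```

In practice each answer would have many alternate phrasings of its question attached, which is why the structure in Diagram 1 fans out like it does.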

Now, we tend to think of these things in terms of “chatting” and/or answering questions, but what if we could turn that on its head? What if we started with some questions or activities, and the responses from those built content/activities/options in a more dynamic fashion using something like H5P or conversational user interfaces (except without the part that tries to fool you that a real person is chatting with you)? In other words, what if we replaced the answers with content and the questions with responses from learners in Diagram 1 above:

Diagram 3: Basic AI structure of connecting learner responses to content/activities/learning pathway options.

And then we replaced the chatbot with a more dynamic interactive interface in Diagram 2 above:

Diagram 4: Basic example of connecting learners with content/activity groupings based on learner responses to prompts embedded in content, activities, or videos.

The goal here would be to not just send back a response to a chat question, but to build content based on learner responses – using tools like H5P to make interactive videos, branching text options, etc. on the fly. Of course, most people see this and think of how it could be used to create different ways of looking at content in a specific topic. Creating greater diversity within a topic is a great place to start, but there could also be bigger ways of looking at this idea.

For example, you could take a cross-disciplinary approach to a course and use a system like this to bring in different areas of study. Instead of using the example in Diagram 4 above to add greater depth to a History course, what if it could be used to tap into a specific learner’s curiosities to, say, bring in some other related cross-disciplinary topics:

Diagram 5: Content/activity groupings based on matching learner responses with content and activities that connect with cross-disciplinary resources.

Of course, there could be many different ways to approach this. What if you could also utilize a sociocultural lens with this concept? What if you have learners from several different countries in a course and want to bring in content from their contexts? Or you teach in a field that is very U.S.-centric that needs to look at a more global perspective?

Diagram 6: Content/activity groupings based on matching learner responses with content and activities that focus on different countries.

Or you could also look at dynamic content creation from an epistemological angle. What if you had a way to rate how instructivist or connectivist a learner is (something I started working on a bit in my dissertation work)? Or maybe even use something like a Self-Regulated Learning Index? What if you could connect learners with lessons and activities closer to what they prefer or need based on how self-regulated, connectivist, experienced, etc. they are?

Diagram 7: The content/activity groupings above are based on a scale I created in my dissertation that puts “mostly instructivist” at 1.0 and “mostly connectivist” at 2.0.
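Matching a learner to a grouping on that scale could be as simple as a nearest-position lookup. A minimal sketch, with illustrative grouping labels (the scale endpoints come from the dissertation work mentioned above; everything else here is hypothetical):

```python
# Content/activity groupings placed along the 1.0 ("mostly instructivist")
# to 2.0 ("mostly connectivist") scale. Labels are made-up examples.

GROUPINGS = {
    1.0: "structured lesson with guided activities",
    1.5: "mixed lesson with optional open tasks",
    2.0: "open prompt with networked activities",
}

def grouping_for(learner_score: float) -> str:
    """Pick the grouping whose scale position is closest to the learner's
    measured instructivist/connectivist score."""
    nearest = min(GROUPINGS, key=lambda pos: abs(pos - learner_score))
    return GROUPINGS[nearest]
```

A Self-Regulated Learning Index could slot into the same structure by swapping out the scale and the groupings.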

You could also even look at connecting an assignment bank to something like this to help learners get out-of-the-box ideas for how to prove what they have been learning:

Diagram 8: Content/activity groupings based on matching learner responses with specific activities they might want to try from an assignment bank.

Even beyond all of this, it would be great to build a system that allows for mixes of responses to each prompt rather than just one (or even systems that allow you to build on one response with the next one in specific ways). The red lines in the diagrams above represent what the AI sees as the “best match,” but what if it was indicating the percentage of what content should come from which content pool? The cross-disciplinary image above (Diagram 5) could move from just picking “Art” as the best match to making a lesson that is 10% Health, 20% History, 50% Art, and so on. Or the first response would be some related content on “Art,” then another prompt would pull in a bit from “Health.”
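The move from "best match" to a percentage blend is mostly a change in how the match scores are used: instead of taking the maximum, normalize the scores into proportions. A hedged sketch (the pool names and raw scores are hypothetical):

```python
# Sketch of the "percentage blend" idea: rather than returning only the
# best-matching content pool, turn the raw match scores into weights that
# say how much of the lesson each pool should contribute.

def blend_weights(scores):
    """Normalize raw match scores into proportions that sum to 1."""
    total = sum(scores.values())
    return {pool: score / total for pool, score in scores.items()}

# Hypothetical scores for the cross-disciplinary example in Diagram 5:
weights = blend_weights({"Art": 5.0, "History": 2.0, "Health": 1.0, "Math": 2.0})
# Art ends up contributing half the lesson, History and Math a fifth each.
```

Stacking methods (sociocultural and epistemological at once) would then be a matter of combining several such weight sets, which is where the authoring complexity question comes in.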

Then the even bigger question is: can these various methods be stacked on top of each other, so that you are not forced to choose sociocultural or epistemological separately, but the AI could look at both at once? Probably so, but would a tool to create such a lesson be too complex for most people to practically utilize?

Of course, something like this is ripe for bias, so that would have to be kept in mind at all stages. I am not sure exactly how to counteract that, but hopefully if we try to build more options into the system for the learner to choose from, this will start dealing with and exposing that. We would also have to be careful to not turn this into some kind of surveillance system to watch learners’ every move. Many AI tools are very unclear about what they do with your data. If students have to worry about data being misused by the very tool that is supposed to help them, that will cause stress that is detrimental to learning. So in order for something like this to work, openness and transparency about what is happening – as well as giving learners control over their data – will be key to building a system that is actually usable (trusted) by learners.