What if We Could Connect Interactive Content Like H5P to Artificial Intelligence?

You might have noticed some chatter recently about H5P, which can create interactive content (videos, questions, games, etc.) that works in a browser through HTML5. The concept seems fairly similar to the E-Learning Framework (ELF) from APUS and other projects started a few years ago based on HTML5 and/or jQuery – but those seem to have mostly disappeared or been kept secret. The fact that H5P is easily shareable and open is a good start.

Some of our newer work on self-mapped learning pathways is starting to focus on how to build tools that can help learners map their own learning pathway through multiple options. Something like H5P would be a great tool for that. I am hoping that the future of H5P will include ways to harness AI to mix and match content in ways beyond what most groups currently do with HTML5.

To explain this, let me take a step back and look at where our work with AI and chatbots currently sits, and point to where this could all go. Our goal right now is to build branching tree interfaces and AI-driven chatbots to help students get answers to FAQs about various courses. This is not incredibly ground-breaking at this point, but we hope to take it in some interesting directions.

So, the basic idea with our current chatbots is that you create answers first and then come up with a set of questions that serve as different ways to get to each answer. The AI uses Natural Language Processing and other algorithms to take what is entered into a chatbot interface and match the entered text with a set of questions:

Diagram 1: Basic AI structure of connecting various question options to one answer. I guess the resemblance to a snowflake shows I am starting to get into the Christmas spirit?

You put a lot of answers together into a chatbot, and the oversimplified way of explaining it is that the bot tries to match each question from the chatbot interface with the most likely answer/response:

Diagram 2: Basic chatbot structure of determining which question the text entered into the bot interface most closely matches, and then sending that response back to the interface.
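To make that matching step a little more concrete, here is a rough sketch in Python of the oversimplified version described above. This is not our actual stack – the FAQ entries are made up, and a production bot would use a proper NLP/intent-matching service rather than raw TF-IDF similarity – but it shows the basic "match the entered text to the closest known question" idea:

```python
# A minimal sketch of the matching idea described above (not our actual
# stack): each answer has several example questions, and the bot returns
# the answer whose example question is most similar to what the learner
# typed. Assumes scikit-learn; a real bot would use a proper NLP service.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Made-up FAQ: one answer, many ways of asking for it (Diagram 1).
faq = {
    "The first assignment is due at the end of week 2.": [
        "When is the first assignment due?",
        "What is the due date for assignment one?",
    ],
    "Yes, the course is completely free to take.": [
        "Does the course cost anything?",
        "Is this MOOC free?",
    ],
}

# Flatten into parallel lists of questions and their answers.
questions, answers = zip(*[(q, a) for a, qs in faq.items() for q in qs])

vectorizer = TfidfVectorizer()
question_vectors = vectorizer.fit_transform(questions)

def best_answer(user_text):
    """Return the answer whose example question best matches the input (Diagram 2)."""
    scores = cosine_similarity(vectorizer.transform([user_text]), question_vectors)
    return answers[scores.argmax()]

print(best_answer("how much does this course cost"))
# -> "Yes, the course is completely free to take."
```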

This is our current work – putting together a chatbot-fueled FAQ for the upcoming Learning Analytics MOOCs.

Now, we tend to think of these things in terms of “chatting” and/or answering questions, but what if we could turn that on its head? What if we started with some questions or activities, and the responses from those built content/activities/options in a more dynamic fashion using something like H5P or conversational user interfaces (except without the part that tries to fool you that a real person is chatting with you)? In other words, what if we replaced the answers with content and the questions with responses from learners in Diagram 1 above:

Diagram 3: Basic AI structure of connecting learners' responses to content/activities/learning pathway options.

And then we replaced the chatbot with a more dynamic interactive interface in Diagram 2 above:

Diagram 4: Basic example of connecting learners with content/activity groupings based on learner responses to prompts embedded in content, activities, or videos.

The goal here would be to not just send back a response to a chat question, but to build content based on learner responses – using tools like H5P to make interactive videos, branching text options, etc. on the fly. Of course, most people see this and think of how it could be used to create different ways of looking at content in a specific topic. Creating greater diversity within a topic is a great place to start, but there could also be bigger ways of looking at this idea.
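Before going bigger, here is a rough sketch of that inversion. The pool names and trigger phrases are hypothetical, and a real system would use NLP matching like the chatbot above rather than crude word overlap – but it shows learner responses being routed to content/activity groupings instead of canned answers:

```python
# A rough sketch of the inversion (Diagrams 3 and 4): learner responses get
# matched to content/activity groupings that an H5P-style tool could render.
# Pool names and trigger phrases are hypothetical; a real system would use
# NLP matching like the chatbot sketch above instead of word overlap.
CONTENT_POOLS = {
    "interactive-video": ["show me a video", "I learn better by watching"],
    "branching-scenario": ["I want to explore and choose my own path"],
    "practice-questions": ["quiz me", "I want to test myself"],
}

def pick_grouping(response):
    """Pick the content pool whose trigger phrases best overlap the response."""
    words = set(response.lower().split())
    return max(
        CONTENT_POOLS,
        key=lambda pool: max(
            len(words & set(trigger.lower().split()))
            for trigger in CONTENT_POOLS[pool]
        ),
    )

print(pick_grouping("quiz me please"))  # -> practice-questions
```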

For example, you could take a cross-disciplinary approach to a course and use a system like this to come up with ways to bring in different areas of study. Instead of using the example in Diagram 4 above to add greater depth to a History course, what if it could be used to tap into a specific learner's curiosities to, say, bring in some other related cross-disciplinary topics:

Diagram 5: Content/activity groupings based on matching learner responses with content and activities that connect with cross-disciplinary resources.

Of course, there could be many different ways to approach this. What if you could also utilize a sociocultural lens with this concept? What if you have learners from several different countries in a course and want to bring in content from their contexts? Or what if you teach in a field that is very U.S.-centric and needs a more global perspective?

Diagram 6: Content/activity groupings based on matching learner responses with content and activities that focus on different countries.

Or you could look at dynamic content creation from an epistemological angle. What if you had a way to rate how instructivist or connectivist a learner is (something I started working on a bit in my dissertation work)? Or maybe even use something like a Self-Regulated Learning Index? What if you could connect learners with lessons and activities closer to what they prefer or need based on how self-regulated, connectivist, experienced, etc. they are?

Diagram 7: The content/activity groupings above are based on a scale I created in my dissertation that puts “mostly instructivist” at 1.0 and “mostly connectivist” at 2.0.
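To make that scale concrete, here is a tiny hypothetical sketch – the cut-off points below are invented for this example, not taken from my dissertation:

```python
# Hypothetical illustration of the 1.0-2.0 scale in Diagram 7. The cut-off
# points below are invented for illustration, not from the dissertation.
def lesson_style(score):
    """Map a learner's scale position (1.0 = mostly instructivist,
    2.0 = mostly connectivist) to a content/activity grouping."""
    if score < 1.33:
        return "structured lecture + guided practice"    # mostly instructivist
    if score < 1.66:
        return "mix of guided and self-directed options"
    return "open-ended, network-based activities"        # mostly connectivist

print(lesson_style(1.8))  # -> open-ended, network-based activities
```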

You could also even look at connecting an assignment bank to something like this to help learners get out-of-the-box ideas for how to prove what they have been learning:

Diagram 8: Content/activity groupings based on matching learner responses with specific activities they might want to try from an assignment bank.

Even beyond all of this, it would be great to build a system that allows for mixes of responses to each prompt rather than just one (or even systems that allow you to build on one response with the next in specific ways). The red lines in the diagrams above represent what the AI sees as the "best match," but what if they instead indicated what percentage of the content should come from each content pool? The cross-disciplinary image above (Diagram 5) could move from just picking "Art" as the best match to making a lesson that is 10% Health, 20% History, 50% Art, and so on. Or the first response would be some related content on "Art," then another prompt would pull in a bit from "Health."
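As a rough sketch of that last idea (with made-up match scores), the change is basically swapping an argmax for a normalization over the scores:

```python
# Sketch of the "percentage mix" idea: instead of taking the single best
# match (an argmax), normalize the match scores into proportions and build
# the lesson from each content pool accordingly. Scores here are made up.
scores = {"Art": 0.50, "History": 0.20, "Health": 0.10, "Geography": 0.05}

total = sum(scores.values())
mix = {pool: round(score / total, 2) for pool, score in scores.items()}

print(mix)
# -> {'Art': 0.59, 'History': 0.24, 'Health': 0.12, 'Geography': 0.06}
```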

Then the even bigger question is: can these various methods be stacked on top of each other, so that you are not forced to choose sociocultural or epistemological separately, but the AI could look at both at once? Probably so, but would a tool to create such a lesson be too complex for most people to practically utilize?

Of course, something like this is ripe for bias, so that would have to be kept in mind at all stages. I am not sure exactly how to counteract that, but hopefully, if we build more options into the system for the learner to choose from, that will start dealing with and exposing it. We would also have to be careful not to turn this into some kind of surveillance system that watches learners' every move. Many AI tools are very unclear about what they do with your data. If students have to worry about data being misused by the very tool that is supposed to help them, that will cause stress that is detrimental to learning. So in order for something like this to work, openness and transparency about what is happening – as well as giving learners control over their data – will be key to building a system that is actually usable (trusted) by learners.

Digital Out of Body Experiences

Ever get crazy ideas about the future of technology? I was pondering some of the new technology that different groups/companies are working on, and had a crazy thought about the future of computer interfaces. It all started with thinking about computers in the 1990s. For those who remember the 1990s, there was something magical about the computers that were coming out then. It was like the displays suddenly leapt well beyond what we saw even in Science Fiction movies. I mean, you could get a decent desktop computer that looked fancier than anything on Star Wars or Star Trek, and it could play CDs, store files (remember having to save everything on a disk? How quaint), connect to other people, play rough videos, etc. You didn't see a whole lot of that in the movies.

Today we have people working on amazing stuff. Sensors that follow your moves well enough to let you play video games. Using WiFi to see through walls. Immersive heads-up displays. We see some cool stuff in movies today, but I wonder if reality will actually move beyond our current Sci-Fi paradigms of future interfaces into something totally different.

As we increase the ability to quickly detect and map the immediate world around us through cameras, sensors, WiFi signals, sounds, etc., we will soon have the ability to create a photo-realistic digital 3-D recreation in real time. Which sounds cool in itself for, say, recording important events and then re-living them later. Throw those recordings into an immersive Oculus Rift-like helmet and it's like you are back there again. But what if you had the helmet on while recording? Since your sensors probably extend a good 100 feet or more, you could realistically "pull away" and rotate the display from your body the same way you spread your fingers across a map on a smart phone to zoom out. What this means is that we will probably see the ability to have realistic digital out-of-body experiences in our lifetime.

Sounds creepy, but also think of the safety implications. What if you drove this way, and since you can pull back and see around corners, you get in fewer accidents? You could even start driving your car like a video game, with a video game joystick. The same could go for fighter pilots in battle – think of the advantage you could have, seeing the whole battlefield like a realistic game. Also, imagine public safety – the ability to look through a building for a bomb threat without setting foot inside, for instance.

Of course, there are huge privacy concerns with this idea. Would we have to invent new paints and window films that can block these technologies in order to secure not only government buildings but our own houses? I am sure some solution will present itself.

Of course, we don’t always have to go big. Doctors could use this technology to guide miniature robots all over the human body, or even perform routine work on contagious patients from a safe distance.

Of course, I have been talking about co-located events here, but since we are talking about transferring digital information to a display, that display could technically be anywhere in the world, and this "out of body" experience could be transferred over the Internet. Educationally, think of the ways we could change teaching if we could send learners anywhere we want with little physical danger. Historical sites could set up tours online – just create a protocol for streaming your sensors online, and people could go all over the place in the middle of class. And not just international trips that are cost-prohibitive in real life – also think of trips that are dangerous, like inside a volcano or hurricane, or to the bottom of the ocean.

Of course, all of this is kind of akin to floating around someplace like a digital ghost that no one can see – which is good in some situations, but not others. But what if we can combine these sensors with holographic projectors to project the virtual visitor as if they were actually there? Collaboration pretty much reaches the level of holodecks. What will that mean for classes when we have this ability? What could we learn about ourselves if we have the ability to re-watch ourselves later from an outsider’s perspective? For all of the fields that involve interaction, what would that mean to be able to replay a whole interaction? What would this mean for role play?

It's kind of creepy and interesting at the same time. But then again, back in the 90s, the idea of sharing personal pictures and random personal thoughts on Facebook was creepy and interesting also. We will see where all of this goes, but I hope the people who are working on these technologies are dreaming big enough to work through the creepy and into the interesting.

Make Your Brain Happy by Learning Something Online

All I can say is that I knew it all along. Jacqueline Barnes of Litmos LMS says that “our brains love learning online.” Or I guess it would be more accurate to say that research is possibly indicating that certain aspects of the online experience help us to enjoy the learning process a bit more.

A closer look at the research shows that it is not really just anything and everything about online learning that helps us learn better, but specific concepts and ideas focused primarily on engagement, social presence, encouragement, and immediacy. What I don't see in the research is any mention of long lecture-capture videos, digitizing standardized tests and uploading them online, 500-slide death-by-PowerPoint modules, or any of the other standards that we typically see in online courses.

In other words, the bad, boring teaching concepts that have been bad, boring teaching concepts for centuries will continue to be bad, boring teaching concepts no matter how much technology we wrap around them. [ahem…. iBooks 2?]

So many times when I read about certain colleges putting "free courses" online I cringe – when all they are really doing is putting popular lecture captures online. I have tried to watch these free videos, and no matter how well-spoken or humorous the professor is, I just can't sit there and watch to the end without my attention wandering.

What these recent studies don't necessarily say directly – but still possibly suggest – is that our brains are happy when we are actively engaged in the learning process. Passively sitting there and staring at the screen for a long time? Not so much. I hate to admit it, but that is why I have never been able to get into Khan Academy that much. If you love it – great. I just need more engagement and less "sit and stare."

Transparent Screens Mean That "The future is now-ish!"

There seems to be a lot of attention given recently to transparent displays – basically, monitors that you can (somewhat) see through. Despite what CSI: Miami would have us believe, they aren't here yet – but close. It seems that LG is in the lead with 47″ 1080p HD touchscreens that you can see through. The prediction is that these screens will take augmented reality to a new level. This quote from an article on ExtremeTech.com caught my eye:

In ten years, we may no longer have cell phones in our pockets, they will be built into our glasses and perhaps even contact lenses.

Sounds familiar to me :) But it looks like we are going to see augmented reality sooner rather than later – so I am going to start saving up for the iPhone 10 now….

A Brave New World Free of PowerPoints

TxDLA was a great event this year. Harriet and I did our usual rebel-rousing there (along with other EduGeeks such as Katrina, Darren, and Shaun. Yes, they are still alive). Creating a session PowerPoint is usually difficult for us, since we don't prepare any preset material. We like to discuss, interact, and have some interesting conversations. But since most educators have to have something to look at, we usually put up a PowerPoint with pretty pictures (here is our old set of purty pics).

This year, Harriet created a Prezi presentation.  Prezi is pretty cool in that it can be very non-linear.  You can click and scroll around on the presentation as you like. This gives me hope for a future of conference presentations that are free of PowerPoint overkill.  Here is what I am thinking:

Someday, someone will come up with an iPad competitor that doesn't have all of Steve Jobs's weird hang-ups about Flash. Prezi is built in Flash, so this is key. Oh, and it will run a real operating system instead of iPhone OS. Then they will create a cheap adapter that hooks this superior iPad product to projectors. Then the fun will begin.

Imagine if you could just create a map of all the concepts you want to discuss in a presentation in Prezi. Then use this better iPad model to run the presentation. Using the touch screen, you can scroll around and zoom in on concepts as they come up in the discussion. Non-linear, interactive presentations, controlled by a light, portable touchscreen pad. That would make any session much more active and connected.

Also consider how this could change your classes. Or maybe this already exists and I am just not buying the right products?

Anyways, here is the Prezi from our TxDLA session (which is still linear – we didn’t want to blow too many gaskets in one session):

Next time I hope to go into some thoughts about some of the discussions and feedback we had at the conference – it was some great stuff.

The Future of 3-D In Education

THE Journal released an interview last week with Chris Chinnock (board member of the 3D@Home Consortium) about the future of 3-D in Education.  If you haven’t read the article, then go read it – there is some interesting information in there.  But I have a few thoughts that were left out.

What about computer graphics/modeling and virtual worlds in 3-D? Chinnock discusses the need for content in 3-D. Why not give students the ability to create content/images/etc.? Will the programs to do this be too expensive for schools to utilize in individual classes, the way they do with programs such as Microsoft Word?

These questions (and more) will all probably be asked and answered in the near future, I am sure.  Not to mention that the future of 3-D  is not just about projectors.  There are also advances being made in holographic displays and three-dimensional monitors.

But this is all leading to the fact that the walls between the real world and virtual reality are slowly crumbling away.  We now have the ability to create a virtual reality room.  Surround sound and cameras that can follow your movements already exist.  Combine that with the projectors that Chinnock discusses, pointing in all directions in a room, connected to a super-fast computer that can feed realistic CG to those projectors based on your movements, and you pretty much have the early version of a Star Trek holodeck.  Imagine what Second Life would be like then?

Is Augmented Reality Here?

I’m not sure why I am so interested in augmented reality. I guess it seems more practical and imminent than virtual reality. Maybe I was really, really scared by The Matrix and I don’t want to be enslaved by the machines. Maybe it has been a slow week in EdTech news. After all – is Facebook buying FriendFeed all that big of a shock?

But the thought of a way to have your portable computing device actually interact not only with the World Wide Web, but also with the actual world around you, just seems so… incredible. But how close are we to making augmented reality a true reality in our lives?

Vuzix is working on a product that might just do that: the Wrap 920AV. The 920AV is a pair of sunglasses with a see-through screen that lets you watch a display from your iPhone and still see the world around you through the same display. The page even mentions that the glasses are designed for augmented reality. Click on the accessories tab for some other cool options, like a motion sensor that tracks your head movements and cameras that attach to the top of the glasses.

This is the kind of thing I was thinking of when I first blogged on iPhones and augmented reality. Just think: with these glasses you don't have to hold the iPhone like in the video from that post – the metro map is just displayed on your glasses in front of you. What if you mix this with Sixth Sense technology? What if you just saw your iPhone display floating in front of you, and moved your hands to interact with the apps (kind of like a portable version of the computers in Minority Report)?

Talk about true mobile learning. You know those audio tours you can rent at tourist spots like Alcatraz – the ones that talk you through the attractions? Those can now make the jump from audio to visual – adding historical re-enactments, or showing you what lies behind the walls that they don't want to tear down.

I’m sure this kind of thing will be expensive at first, and I really hope they don’t hype just watching movies on these things.  What I hope is that Apple catches a vision for this and makes it look really cool so that everyone will want it.

eBooks Are Getting Kinda Hot These Days

Maybe it was just me, but it seems like a large number of the updates and emails I got this week were about eBooks. They've been around forever, it seems, without ever really catching on. For a while, it seemed like one of those cool SciFi ideas that looked great in the movies, but people realized it wasn't so cool in real life. I guess that is not the case – developments are happening in many areas… some of which I am not really sure about.

First of all, there is the Amazon Kindle.  Or it’s the Kindle 2 I guess.  Interesting features.  But the lack of color really seems to be a significant minus in my department.  I know it is new technology and all that, and they will probably figure that out some day.  So, really – significant but not huge.  The biggest drawback is the one that always seems to plague the technology world: it’s a separate device.

Yep – yet another device for you to find a storage space for.  It can’t do everything that your laptop, smart phone, digital camera, or even old school beeper can… so you will have to find yet another spot on your belt to hang it from.  Belt holders are quickly becoming the new pocket protectors.

So – maybe you don’t always need it with you, but think about those vacations now…. finding a place for your laptop, digital camera, smart phone, video camera, mp3 player, and now… eBook reader. The apparent overlap of functionality there is mind-boggling.

Or you could just read the eBook on your laptop… or iPod… or smart phone.  But that would be too simple, huh?

Maybe I could see the use of an eBook reader if you marketed them to schools as a replacement for the stacks of books that students are carrying around every day.  Well, at least, the books they should be carrying around.  Of course, for the price of a Kindle you can swing a decent smart phone that reads PDFs and does a whole lot of other things that would be useful.

Another head-scratcher is the Follett Digital Reader. It is not a product, but a whole new format designed to replace PDF files. Basically, it looks like Follett didn't like that some minor features were missing from PDF, so they decided to re-create the wheel… only with slightly different hubcaps. Some nice, but ultimately unnecessary, differences. Or so it seems. You have to have .NET 2.0 on your computer to download and try it out. I avoid .NET 2.0 like the plague… and I don't have admin privileges to install it in the first place. This new reader could actually be the coolest thing since sliced bread, but I'll never get to know. That is a severely limiting factor in my opinion. You may never see this thing make the jump to mobile devices because of that. Or even to a Kindle 2, for that matter.

Oh, and by the way – if you are already a Follett customer, you seem to have no choice in this change. That is what customers today love so much – lack of choice.

Of course, Gmail sounded pretty redundant until people started using it and realizing, 'hey, this really is better than anything out there right now.' Maybe the same will happen with the Follett Digital Reader.

I guess you really can't blame people for trying something new. But… why? Especially when there are already good options out there? My problem with so much of this is that so many companies just re-create overlapping technology. Why not just create a PDF reader with the capabilities that you want, one that works on multiple platforms? Oh, wait – people have already done that and didn't make that much money from it….

If you don’t want to learn from history….

Microsoft’s Multisurface Sphere

(Found this on TechCrunch today…) As Microsoft works on making every available surface a computer screen, a recent demo of Microsoft's Surface Sphere has, well, surfaced.

Could this be technology’s version of the crystal ball? This makes me start imagining 3D objects rendered inside the sphere that we can zoom in on, manipulate, and view from different angles. Anyway, fun to think about.

PS — Matt, I’d like one of these so I can have it set as a globe to put on my desk. There should be enough in our EGJ budget, y’think?

PPS – Just found a related article at Ars Technica.

Best Google Earth Interface Videos

The Google Earth Blog posted yesterday a bunch of video clips of different ways to interface with Google Earth. Thought this was too interesting for just a Twitter/Jaiku post. Enjoy!
