Metaversity, Tri-Brid Models, and the Same-Ol’, Same-Ol’ “Innovation”

“Is education moving to a ‘tri-brid’ model that flows between in-person, online and simulated environments?” asks the EdSurge article titled “With Money From Facebook, 10 Colleges Turn Their Campuses into ‘Metaversities’.” Of course, the answer is “no” – but you have to wonder how much time and attention will be given to this idea before it fades into the “Google Wave is the future of education” graveyard.

To be honest, I actually enjoy VR – I have fun playing the occasional game in there, or “hanging out” with family in other cities. I think the ability to create simulations makes VR an interesting educational tool. Interactive tools? Meh – don’t hate them, don’t love them. But moving an entire educational experience or course or campus inside of VR sounds like we are headed in the wrong direction – back toward the “Sage on the Stage” model rather than away from it. Sitting and watching an instructor in a classroom is replaced with sitting there with the classroom strapped to your face (while it collects all kinds of data on the decisions you make, without you even leaving the room). “Only better,” as I guess the people involved with this Engage-supported project would claim.

It was the whole claim of “Tri-Brid” that first caught my attention:

“Arés argues that the rise of VR technologies will shift the current hybrid model of education—which draws on separate in-person and online environments—into a ‘tri-brid’ model, one that moves ‘seamlessly between online, in-person and simulated, without the limits of time, travel and scale.'”

First off – how do you move “seamlessly” between in-person and simulated when in-person is limited by time, travel, and scale? They point out that learners without VR headsets can use a monitor – so let’s be real here: the “simulated” portion is really just another version of Zoom. This is not really a Tri-Brid but an “Extended Hybrid” model. Look at the picture at the top of the article – it is still Sage-on-the-Stage, still a white professor lecturing an all-white class, the same ol’, same ol’… with a cool 3-D model added to the PowerPoint.

Sounds a lot like Second Life, right? “Not so fast!” the creators say:

“This may bring to mind the now-defunct digital campuses that universities set up 15 years ago using Second Life—but leaders of the new project are quick to claim that this will be way better.”

Well, okay – I have used Oculus, and in many ways it does work better than Second Life. But when people talk about Second Life in this context, they aren’t talking about graphic quality, or interface design, or any of that. Of course we expect those aspects to improve over time. The problem with Second Life classrooms was that for every one that was actually interesting, there were dozens more that were just sub-par equivalents of a video conference. It took lots of time and money to create decent scenarios, and half the time the novelty wore off and people went back to more traditional online modes.

In other words – for every “look at how the heart functions by going inside of one” simulation that was out there, there were so so so many “here is my classroom camera feed streaming to a screen in a vague virtual re-creation of my classroom.”

There are other foundational problems with the idea as well. Take this passage from the article about Monica Arés, head of Immersive Learning at Meta:

Arés, the leader of Immersive Learning at Meta, is a former teacher. She recalls the nagging worry that her lessons might not hold her students’ attention. “I would spend countless hours trying to create lessons that are visually rich,” Arés says. “I knew the second I put that headset on it was the medium I had been looking for.”

Any Instructional Designer and most teachers will tell you that there are all kinds of ways to hold students’ attention beyond just making lessons “visually rich.” When your foundational idea of what makes learning effective is this skewed… I have to worry about the overall project.

And the problems don’t stop there. David Whelan, founder and CEO of Engage, had this to say:

“Computers started in homes as entertainment, then creeped into school, then into everyday use items and at jobs,” Whelan says. “VR could take the same route.”

This is not true at all. Most people I know first interacted with computers at schools and universities long before they had one at home. Read any book about the ’80s. Many people were even working on computers at a job before getting one at home. They were really, really expensive at the beginning. Many of us remember arranging our schedules around low-traffic times at school/university computer labs so that we could find a free machine.

Computers started at businesses and universities. It’s a pretty easy historical fact to look up.

Maybe Whelan grew up in a wealthy home that could afford a home computer early. Maybe he is not old enough to even know where computers started. I honestly don’t know anything about him. Either way – it makes you question the ability of his company to really know what is going on and where the tech world is going.

Then there is the question of whether or not students will actually like being in VR in the first place. The article makes this claim:

“Trying to pay attention in a college course while manipulating an avatar around a virtual classroom can feel a little odd. But the new Stanford study suggests that this kind of setting gets more comfortable for students over time.”

However, I noticed that the article does not go into how comfortable students were overall, or how much their comfort actually increased over time.

Turns out, the answer to both is more like “not very much.”

If you look at the original study in question, the results aren’t that impressive. When students were asked to rate “enjoyment” on a Likert scale of 1 to 5, the results hovered between 3.1 and 3.7. In some ways of crunching the data, enjoyment did increase slightly over time – but not by much.

Self-presence and spatial presence (feeling like you are really in the environment) both hovered around 4 on a Likert scale of 1 to 7. Social presence and entitativity (“the degree to which a collection of people is perceived as a single, unified entity”) fared a little better: between 5 and 6 on a 1-to-7 Likert scale. But how much of that is attributable to all of the work online courses have put into increasing those aspects for decades?
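
To see why these numbers read as “meh,” it helps to put the different scales on common footing. Here is a minimal Python sketch of that normalization, using the approximate means quoted above (my rough readings of the reported figures, not the study’s raw data):

    # Map a Likert mean onto a 0-1 range so scores from different
    # scales can be compared side by side.
    def normalize_likert(mean_score, scale_max, scale_min=1):
        return (mean_score - scale_min) / (scale_max - scale_min)

    # Approximate means as quoted in this post (assumptions, not raw data)
    scores = {
        "enjoyment (1-5 scale)": (3.4, 5),
        "self/spatial presence (1-7 scale)": (4.0, 7),
        "social presence/entitativity (1-7 scale)": (5.5, 7),
    }

    for label, (mean, scale_max) in scores.items():
        print(f"{label}: {normalize_likert(mean, scale_max):.2f}")

That puts enjoyment around 0.60, self/spatial presence at 0.50, and social presence around 0.75 – mostly hovering near the middle of their scales, which is exactly the point.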

Overall, a more honest reading of the outcomes of the study is that, on average, the reaction to comfort in VR was “meh.” Sure – all factors increased over time, but how much of that increase came from the learners being more aware of those factors because they were asked about them every week? And is it really that surprising to say that people get more used to something the more they use it? That doesn’t mean they really liked it in the first place.

But you might also say, “if they want to waste money on a rabbit hole, what’s the big deal? It’s not like they have their sights set on anything bigger.” Well….

“I do hope that things like the socioeconomic divide and geography divide can potentially be bridged in education because of some of these new technologies like VR,” (Greg Heiberger, associate dean of academics and student success at South Dakota State University) says. “Those would be the two tenets I would guess are near the top of their (Meta Immersive Learning) list: making money and giving some of those resources back to make the world a better place.”

Not sure how they plan to eliminate redlining, food insecurity, prejudice, and all kinds of other societal problems that drive these divides… through VR (and other tech)? A statement like this kind of feels like the “tossing a roll of paper towels” moment of this whole idea. If there is one thing we have learned about the world, it is that you can always count on the rich to give their wealth to the poor rather than spend it on some huge vanity project.

But obviously, Heiberger needs to talk to Arés about all of this “trickle-down” wealth:

“Arés said that Meta is not focused on earning revenue from the partnership; instead, the ‘main goal is to increase access to education and transform the way we learn.'”

Transform the way we learn – by sticking a white dude avatar in front of a 3-D PowerPoint screen and then strapping this transformed classroom to students’ faces so they can virtually sit in desks from the comfort of their own homes. Even though those homes might not be “comfortable” for all, and you can only wear an Oculus so long before your face starts hurting. Funny how research studies never examine how deep the headset-shaped red marks are on users’ faces at the end of these immersive learning sessions.

Social Presence, Immediacy, Virtual Reality, and Artificial Intelligence

While doing some research for my current work on AI and chatbots, I was struck by how much some people are trying to use bots to fool people into thinking they are really humans. This seems like a problematic road to go down, since we know that people are not necessarily against interacting with non-human agents (like those of us who prefer to get basic information, such as bank account balances, over the phone from a machine rather than bother a human). At their core, I think these efforts are really aimed at humanizing those tools, which is not a bad aim. I just don’t think we should ever get away from openness about who or what we are having learners interact with.

I was reminded of Second Life (remember that?) and how we used to question why some people would build traditional structures like rooms and stairs in spaces where avatars could fly. At the time, that was the “cool, hip” way to mock people you didn’t think “understood” Second Life. However, I am wondering if maybe there was something to this approach that we missed?

Concepts like social presence and immediacy have fallen out of the limelight in education, but they still have immense value (and thankfully many people still promote them). We need something in our educational efforts, whether in classrooms or at a distance online, that connects us to other learners in ways that we can feel, sense, connect with, etc. What if one way of doing that is by creating human-based structures in our virtual/digital interactions?

I’m not saying to ditch anything experimental and just recreate traditional classroom simulations in virtual reality, or re-enact standard educational interactions with chatbots. But what if incorporating some of those elements could help bring about more of a human element?

To be honest, I am not sure where the right “balance” of these two concepts would be. If I enter a virtual reality space that is just like a building in real life, I will probably miss out on the affordances of exploration that virtual reality could bring to the table. But if I walk into some wild, trippy learning space that looks like a foreign planet to me, I will have to spend more time figuring out how things work than actually learning about the topic I am interested in. I would also feel a bit out of contact with humanity if there is little to tie me back to what I am used to in real life.

The same could be said about the interactions we are designing for AI and chatbots. On one hand, we don’t need to mimic the status quo in the physical world just because it is what we have always done. But we also don’t need to do things that are way out there just because we can, either. Somewhere there is probably a happy medium of humanizing these technologies enough for us to connect with them (without trying to trick people into thinking they are humans) while still not replicating everything we already know just because that is what we know. I know some Social Presence Theory people would balk at the idea of those ideas being applied to technology, but I am thinking more of how we can use those concepts to inform our designs – just in a more meta fashion. Something to mull over for now.
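
As a concrete (if toy) illustration of the “openness” side of that happy medium, here is a minimal Python sketch of a course bot wrapper that always identifies itself as a bot before its first answer. Everything in it – the DisclosingBot class, the generate_reply function – is hypothetical, meant to show the design principle rather than any particular chatbot API:

    # A thin wrapper that guarantees a course bot discloses its non-human
    # identity in its first reply, no matter what generates the answers.
    DISCLOSURE = "Hi! I am an automated course assistant, not a human."

    class DisclosingBot:
        def __init__(self, generate_reply):
            self.generate_reply = generate_reply  # any text-in/text-out function
            self.introduced = False

        def reply(self, message):
            answer = self.generate_reply(message)
            if not self.introduced:
                self.introduced = True
                return DISCLOSURE + "\n\n" + answer
            return answer

    # Usage: bot = DisclosingBot(lambda msg: "Office hours are Tuesdays at 3.")
    # print(bot.reply("When are office hours?"))

The design choice here is that the disclosure lives in the wrapper, not the bot itself – so the humanizing work can go into the replies without the honesty ever depending on them.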

Thoughts from a former Second Life advocate

(In response to Matt’s previous post re: the Second Life educational discount…) Actually, the educational discount was pretty good if you consider the amount of space you get on an island and all you can fit there – education, advertisements, meeting spaces, etc.

Where the expense really comes in – and what caused many institutions to balk – is development: people quickly realized that building/programming in SL was not easy by any means for most people without a computer science degree. You’d end up either farming out the development to emerging technologies groups on your campus or paying big bucks to put something up. (Or you’d find some geeky instructional designer who quickly falls in love with it and dumps hundreds of hours into developing in SL.) If you don’t have either of these and you’re using SL for education, you have to invest time in researching areas and finding places that will help achieve your objective.

Yes, I admit – I was a big-time SL advocate in the beginning. I’ve since been able to step back and realize just how much work is involved and exactly how realistic it is (isn’t) to invest time/money in a project like this. SL had tons of potential, especially in education… it just isn’t practical.

I’m wondering what this SL alternative is that was mentioned in the article. (I’ve been away from SL and virtual worlds for so long; I apologize if there’s an obvious answer.) I think even with this alternative, the excitement over virtual worlds will decrease dramatically. My reasoning is this: sure, you have an open-source alternative. But chances are (and Matt, please correct me if I’m wrong) you’ll have to self-host, meaning you’ll have to find hardware to put it on and people to maintain it. I know this almost sounds cliché, but with budgets being slashed as drastically as they are this year and projected for next, most places are just not going to be able to justify the expense. I suspect many schools were already seriously looking at their SL property to be included in the cutbacks we’re all facing, and LL’s announcement just made their decision a lot easier.

RIP SL

Star Trek Forgot To Mention That The Holodeck Was Invented By Google

Okay, so I know that there have been many people working on holodeck-like inventions for quite a while. But none have been quite as cool as Google’s Liquid Galaxy, and I don’t remember hearing about any of the previous attempts being released as open source. Yes – Google released their immersive environment tool as open source. You can read more about it here:

http://googleblog.blogspot.com/2010/09/galaxy-of-your-own.html

Of course, it is the design and software that are open source, not the actual hardware itself. But it is an interesting start, nonetheless. Two things in the article gave me some ideas:

  • You can potentially hook up anywhere from two to “dozens” of screens (see the sketch below for the basic synchronization idea).
  • You can add other virtual interfaces to the set-up. In other words, it is not just limited to Google Earth.
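
The multi-screen setup is essentially a master/slave camera-sync problem: one machine drives the view, and every other screen renders the same view rotated by a fixed per-screen offset. Here is a toy Python sketch of that idea – the message format is my own invention, not Liquid Galaxy’s actual wire protocol:

    import json
    import socket

    PORT = 45678  # arbitrary port chosen for this sketch

    def broadcast_pose(lat, lon, alt, heading):
        """Master: send the current camera pose to every screen on the LAN."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        msg = json.dumps({"lat": lat, "lon": lon, "alt": alt, "heading": heading})
        sock.sendto(msg.encode(), ("255.255.255.255", PORT))

    def receive_pose(screen_offset_degrees):
        """Slave: apply this screen's fixed yaw offset to the master's pose."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", PORT))
        data, _ = sock.recvfrom(4096)
        pose = json.loads(data)
        pose["heading"] = (pose["heading"] + screen_offset_degrees) % 360
        return pose  # hand off to whatever renderer drives this screen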

I wonder how long it will be before someone figures out a way to use Second Life with this? Anyways, here is my idea: First of all, you get a few dozen flat-screen panels with little or no frame (kind of like they do in sports bars with nine screens showing four games) and put them together in a sphere shape with the screens facing inward – probably with a few in the back on a hinge, acting as a door in. Then you get an omnidirectional treadmill for a floor, hooked up to the software in place of a joystick. Finally, add a few motion-detection cameras at key points around the sphere and a wireless microphone. Maybe even add a glove interface of some kind for more detailed controls. Wire all of this to work together with the virtual environment of your choice (Google Earth, Second Life, World of Warcraft, you name it) – and I think we have our first rudimentary holodeck. Maybe even someday use 3-D flat screens.
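
For the software glue, the core job is an input-fusion loop: poll each device, merge the readings into one control state, and forward that to whatever virtual world is on the screens, in place of joystick input. A toy Python sketch of that loop follows – every device call in it (read_velocity, read_gesture, move_avatar) is hypothetical, since real hardware would need its own drivers:

    # Toy input-fusion loop for the DIY holodeck above. All device APIs
    # here are hypothetical stand-ins for real drivers.
    from dataclasses import dataclass

    @dataclass
    class ControlState:
        forward_speed: float  # m/s, from the omnidirectional treadmill
        heading: float        # degrees, from the treadmill belt direction
        gesture: str          # e.g. "point" or "grab", from the glove

    def poll_devices(treadmill, glove):
        """Merge the latest reading from each input device into one state."""
        speed, heading = treadmill.read_velocity()  # hypothetical driver call
        gesture = glove.read_gesture()              # hypothetical driver call
        return ControlState(speed, heading, gesture)

    def run_loop(treadmill, glove, world):
        """Drive the virtual world from the merged state, joystick-style."""
        while True:
            state = poll_devices(treadmill, glove)
            world.move_avatar(state.forward_speed, state.heading)
            if state.gesture == "grab":
                world.interact()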

Probably pretty expensive to buy all this.  Probably also a little tough to figure out how to get all those systems to work together.  But I am sure it can be done.  So who has a grant to try this out?