Social Presence, Immediacy, Virtual Reality, and Artificial Intelligence

While doing some research for my current work on AI and chatbots, I was struck by how hard some people are trying to use bots to fool people into thinking they are really humans. This seems like a problematic road to go down, since we know that people are not necessarily against interacting with non-human agents (like those of us who prefer to get basic information like bank account balances over the phone from a machine rather than bother a human). At the core, I think these efforts are really aimed at humanizing those tools, which is not a bad aim. I just don’t think we should ever get away from openness about who or what we are having learners interact with.

I was reminded of Second Life (remember that?) and how we used to question why some people would build traditional structures like rooms and stairs in spaces where your avatar could fly. At the time, mocking those people was the “cool, hip” thing to do if you thought they didn’t really “understand” Second Life. However, I am wondering if maybe there was something to their approach that we missed.

Concepts like social presence and immediacy have fallen out of the limelight in education, but they still have immense value (and many people still promote them thankfully). We need something in our educational efforts, whether in classrooms or at a distance online, that connects us to other learners in ways that we can feel, sense, connect with, etc. What if one way of doing that is by creating human-based structures in our virtual/digital interactions?

I’m not saying we should ditch everything experimental and just recreate traditional classroom simulations in virtual reality, or re-enact standard educational interactions with chatbots. But what if incorporating some of those familiar elements could help bring about more of a human element?

To be honest, I am not sure where the right “balance” between these two concepts lies. If I enter a virtual reality space that is just like a building in real life, I will probably miss out on the affordances of exploration that virtual reality could bring to the table. But if I walk into some wild, trippy learning space that looks like a foreign planet to me, I will have to spend more time figuring out how things work than actually learning about the topic I am interested in. I would also feel a bit out of touch with humanity if there is little to tie me back to what I am used to in real life.

The same could be said about the interactions we are designing for AI and chatbots. On one hand, we don’t need to mimic the status quo of the physical world just because it is what we have always done. But we also don’t need to do things that are way out there just because we can, either. Somewhere there is probably a happy medium: humanizing these technologies enough for us to connect with them (without trying to trick people into thinking they are humans), while still not replicating everything we already know just because that is what we know. I know some Social Presence Theory people would balk at applying those concepts to technology, but I am thinking more of how we can use them to inform our designs – just in a more meta fashion. Something to mull over for now.

Thoughts from a former Second Life advocate

(In response to Matt’s previous post re: the Second Life educational discount…) Actually, the educational discount was pretty good if you consider the amount of space you get on an island and all you can fit there – education, advertisements, meeting spaces, etc.

Where the expense really comes in, and what caused many institutions to balk, is development — people quickly realized that building/programming in SL was not easy by any means for most people without a computer science degree. You’d end up either farming out the development to an emerging technologies group on your campus or paying big bucks to put something up. (Or you’d find some geeky instructional designer who quickly falls in love with it and dumps hundreds of hours into developing in SL.) If you don’t have any of those options and you’re using SL for education, you have to invest time in researching areas and finding places that will help achieve your objective.

Yes, I admit — I was a big-time SL advocate in the beginning. I’ve since been able to step back and realize just how much work is involved and how realistic it is (or isn’t) to invest time and money in a project like this. SL had tons of potential, especially in education … it just isn’t practical.

I’m wondering what this SL alternative is that was mentioned in the article. (I’ve been away from SL and virtual worlds for so long; I apologize if there’s an obvious answer.) I think even with this alternative, the excitement over virtual worlds will decrease dramatically. My reasoning is this — sure, you have an open source alternative. But chances are (and Matt, please correct me if I’m wrong) you’ll have to self-host, meaning you’ll have to find hardware to put it on and people to maintain it. I know this is almost sounding cliché, but with budgets being slashed as drastically as they are this year (and projected to be next), most places are just not going to be able to justify the expense. I suspect many schools were already seriously looking at including their SL property in the cutbacks we’re all facing, and LL’s announcement just made their decision a lot easier.

RIP SL

Star Trek Forgot To Mention That The Holodeck Was Invented By Google

Okay, so I know that there have been many people working on holodeck-like inventions for quite a while. But none have been quite as cool as Google’s Liquid Galaxy, and I don’t remember hearing about any of the previous attempts being released as open source. Yes – Google released their immersive environment tool as open source. You can read more about it here:

http://googleblog.blogspot.com/2010/09/galaxy-of-your-own.html

Of course, it is the design and software that is open-source, not the actual hardware itself.  But it is an interesting start, nonetheless.  Two things in the article gave me some ideas:

  • You can potentially hook up anywhere from two to “dozens” of screens.
  • You can add other virtual interfaces to the set-up. In other words, it is not just limited to Google Earth.

I wonder how long it will be before someone figures out a way to use Second Life with this? Anyways, here is my idea: First of all, you get a few dozen flat screen panels with little or no frame (kind of like the ones sports bars use to show four games across nine screens) and put them together in a sphere shape with the screens facing inward. Probably with a few in the back on a hinge acting as a door in. Then you get an omnidirectional treadmill for a floor, hooked up to the software in place of a joystick. Finally, add a few motion detection cameras at key points around the sphere and a wireless microphone. Maybe even add a glove interface of some kind for more detailed controls. Wire all of this to work together with the virtual environment of your choice (Google Earth, Second Life, World of Warcraft, you name it) – and I think we have our first rudimentary holodeck. Maybe even someday use 3-D flat screens.
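Just to make the “wire all of this to work together” step a bit more concrete, here is a rough sketch of what the software glue might look like. Everything in it is hypothetical – OmniTreadmill, MotionCameras, and VirtualWorldClient are stand-ins for whatever SDKs the real hardware and virtual environment would actually expose, not real APIs – but it shows the basic idea: read the treadmill and cameras on a loop and translate them into movement commands for the avatar.

    # Hypothetical glue layer: these classes are stand-ins, not real drivers or SDKs.
    import math
    import time

    class OmniTreadmill:
        """Stub for an omnidirectional treadmill that reports walking speed and heading."""
        def read(self):
            # Real hardware would report live sensor values; here we simulate a slow walk east.
            return {"speed_m_s": 1.2, "heading_deg": 90.0}

    class MotionCameras:
        """Stub for the motion-detection cameras placed around the sphere."""
        def read_gesture(self):
            # A real system might return gestures like "point", "wave", or "grab".
            return None

    class VirtualWorldClient:
        """Stub for the virtual environment of your choice (Google Earth, SL, WoW...)."""
        def move_avatar(self, dx, dy):
            print(f"avatar moved by ({dx:.2f}, {dy:.2f}) metres")

        def trigger_action(self, gesture):
            print(f"gesture '{gesture}' sent to the environment")

    def run_loop(treadmill, cameras, world, tick_s=0.1, ticks=5):
        """Translate treadmill motion and gestures into in-world commands, one tick at a time."""
        for _ in range(ticks):
            sample = treadmill.read()
            heading = math.radians(sample["heading_deg"])
            step = sample["speed_m_s"] * tick_s
            world.move_avatar(step * math.cos(heading), step * math.sin(heading))

            gesture = cameras.read_gesture()
            if gesture:
                world.trigger_action(gesture)

            time.sleep(tick_s)

    if __name__ == "__main__":
        run_loop(OmniTreadmill(), MotionCameras(), VirtualWorldClient())

The hard part, of course, would be writing real versions of those stubs for each piece of hardware and each environment, which is exactly the integration headache I get at below.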

Probably pretty expensive to buy all this.  Probably also a little tough to figure out how to get all those systems to work together.  But I am sure it can be done.  So who has a grant to try this out?