Using machines/algorithms/computers to teach seems to be popping up a lot recently, with many people expressing concern over the idea that we can program computers to make qualitative decisions (i.e., care about students enough to effectively teach them). The reason we want to create teaching machines is, of course, the seemingly insatiable desire to scale human behavior to thousands of people… while hiring fewer people to do so.
Of course, using machines to scale our empathy and care is nothing new. Answering machines are one example of scaling care: those who use answering machines care about catching phone calls while they are gone, but don't want to hire a personal assistant to sit in the house and take messages. So in a way, they are able to scale the care they have for phone conversations to cover the incoming calls they can't answer. Generally, if you know the person who owns the machine and you know they want to hear what you have to say, you feel that the machine extends that empathy into the times when the person is not physically present to answer the phone.
Something about the intent, design, and personalization of answering machines makes some aspect of communicating care scalable beyond the person behind the machine.
However, somewhere between the answering machine and the computerized teacher, many people feel a disconnect: the necessary level of real care and empathy goes missing. Despite this, some want to continue down the path of computerized teaching, believing that perfecting the program behind the system will erase that disconnect. They are spending millions of dollars to create programs that write custom curriculum for each student, which is ironic given that we used to pay human beings $10-15 an hour at Sylvan Learning Center to hand-write personalized curriculum plans for each learner. Maybe instead of trying to perfect computerized teaching to the point that most learners actually feel "cared" for, what if we tried to figure out what people actually want to have computerized and what they don't?
For example, many people really hate how answering machines are scaled to handle customer service calls at large companies. So what makes that usage different from the basic home answering machine? There are times when people want a person and times when they don't. If you just want your account details confirmed over the phone, you may not want to talk to a person who has no business knowing those details.
So instead of trying to force all teaching into a computer algorithm that many might not be happy with, maybe we should look at what parts learners want to have automated and what parts they don’t.
For example, if you teach online, you have probably run into at least a few posts in the Help forum that start with "I'm embarrassed to post this here, but…" followed by a basic question about procedure or something else covered in the syllabus. Maybe that person would prefer an automated system that answers the question without the public embarrassment?
Of course, what learners want automated differs from learner to learner. But the general idea seems to be that we need to focus our research and money more on "answering machines" and less on "virtual teachers." We need things that help us connect with people at a distance, not things that replace the people in the distance learning process with virtual non-people.