Many people have noticed a growing focus on automation in learning, especially around the idea of computer-graded assignments. Not just telling students whether they picked the right answer on a multiple-choice test, but the actual grading of term papers by complex algorithms. EdX, among others, is working on systems that will grade thousands of student submissions based on what the software thinks the instructor would have given.
Some love this idea; some are creeped out.
Students seem to love the idea of removing instructor bias from the grading equation. Or do they just love the idea that they can learn to game the algorithms? Time will tell, of course.
But at some point, how do we know that learners have actually mastered anything if there is no intelligence in the process that really understands what the student is trying to communicate? After all, if there is anything to all of this social constructivism or connectivism stuff… what happens when one part of the equation is not really intelligent or alive, and therefore not social?
Well, you might say, some day the programs will get complex enough that computers will have artificial intelligence. The problem is that true intelligence requires a bias of some kind. If your life is put on the line against that of a stranger, you will probably fight to live. That is a bias. Or maybe you will take the high road and put the other person's life ahead of your own. That is another bias. If a machine cannot choose between preserving itself and putting others first, it is simply following what it was programmed to do and is not truly intelligent. Even worse, it means a certain bias was programmed into it by its creator.
None of this might faze a pure empiricist/behaviorist. But for those of us who subscribe to anything from pragmatism to constructivism to connectivism, there are huge problems even if programmers can somehow figure out the perfect algorithm. If one side of the equation is not really intelligent, how can learning really be occurring? Even if you are a cognitivist at heart… how do you know that the computer program running the grading algorithm is compatible with the human computer we call a brain? Or how do you know that the organic brain didn't just find a way to game the digital one? Will we really be able to create programs that see past the elaborate smoke screens humans are known to create?