Every time I start a new post over the past couple of months, it just devolves into an “I told you so” take on some aspect of AI. Which, of course, just gets a little old after years of trying to tell people that learning analytics, virtual reality, blockchain, MOOCs, etc, etc, etc are all just like any other Ed-Tech trend. It’s never really different this time. I read the recent “news” that claims “no one” can say why ChatGPT is producing unhinged answers (or bizarre nonsense as others called it). Except, of course, many people (even some working in AI) said this would happen and gave reasons why a while back. So, as usual, they mean “no one that we listen to knows why.” Can’t give the naysayers any credibility for knowing anything. Just look at any AI in education conference panel that never brings in any true skeptics. It’s always the same this time.

Imagine working a job completely dependent on ChatGPT “prompt engineering” and hearing about this, or spending big money to start a degree in AI, or investing in an AI technology for your school, or any other way people are going big with unproven technology like this. Especially when OpenAI just shrugs and says “Model behavior can be unpredictable.” We found out last week just how many new “AI solutions” are secretly feeding prompts to ChatGPT in the background.

Buried at the end of that Popular Science article is probably what should be called out more: “While we can’t say exactly what caused ChatGPT’s most recent hiccups, we can say with confidence what it almost certainly wasn’t: AI suddenly exhibiting human-like tendencies.” Anyone who tries to compare AI to human learning or creativity is just using bad science.

To be honest, I haven’t paid much attention to the responses (for or against) my recent blog posts, just because too many people have bought the “AI is inevitable” Kool-Aid. I am the weirdo that believes education can choose its own future if we would just choose to ignore the thought leaders and big money interests. Recently Ben Williamson outlined 21 issues that show why AI in Education is a Public Problem, with the ultimate goal of demonstrating how AI in education “cannot be considered inevitable, beneficial or transformative in any straightforward way.” I suggest reading the whole article if you haven’t already.

Some of the responses to Williamson’s article are saying that “nobody is actually proposing” what he wrote about. This seems to ignore all of the people all over the Internet that are, not to mention that there have been entire conferences dedicated to saying that AI is inevitable, beneficial, and transformative. I know that many people have written responses to Williamson’s 21 issues, and most of it boils down to saying “it happened elsewhere so you can’t blame AI” or “I haven’t heard of it, so it can’t be true.”

Yes, I know – Williamson’s whole point was to show how AI is continuing troubling trends in education. We can (and should) focus on AI or anything else that continues those trends. And he linked to articles that talked about each issue he was highlighting, so claiming no one is saying what he cited them as saying is odd. AI enthusiasts are some of the last holdouts on Xitter, so I can’t blame people that are no longer active there for not knowing what is being spread all over Elon Musk’s billion dollar personal toilet. Williamson is there, and he is paying attention.

I am tempted to go through the various “21 counterpoints / 21 refutations / 21 answers / etc” threads, but I don’t really see the point. Williamson was clear that he takes an extreme position against using AI in schools. Anyone that refutes every point, even with nuance, is just taking an extreme position in a different direction. Refuting those refutations point by point would just circle back to Williamson’s original arguments. Williamson is just trying to bring attention to the harms of AI. These harms are rarely addressed. Some conferences will maybe have a session or two (out of dozens and dozens of sessions) that talk about harms and concerns. Usually “balanced out” with points about benefits and inevitability. Articles might dedicate a paragraph or two. Keynotes might make mention of how “harms need to be addressed.” But how can we ever address those harms if we rarely talk about them on their own (outside of “pros and cons” arguments), or just refute every point anyone makes about their real impact?

Of course, the biggest (but certainly not best) institutional argument against AI in schools comes from OpenAI saying that it would be “impossible to train today’s leading AI models without using copyrighted materials” (materials for which they are not compensating the copyright holders for their intellectual property, FYI). Using ChatGPT (and any AI solution that followed a similar model) is a direct violation of most schools’ academic integrity statements – if anyone actually really meant what they wrote about respecting copyright.

I could also go into “I told you so”s about other things as well. Like how a Google study found that there is little evidence that AI transformer models’ “in-context learning behavior is capable of generalizing beyond their pretraining data” (in other words, AI still doesn’t have the ability to be creative). Or how the racial problems with AI aren’t going away or getting better (Google said that they can’t promise that their AI “won’t occasionally generate embarrassing, inaccurate, or offensive results”). Or how AI is just a fancy form of pattern recognition that is nowhere near equatable to human intelligence. Or how AI takes more time and resources to fix than just doing it yourself in the first place. Or so on and so forth.

(Of course, very little of what I say here is really my original thought – it comes from others that I see as experts. But some people like to make it seem like I am making up a bunch of problems out of thin air.)

For those of us that actually have to respond to AI and use AI tools in actual classrooms, AI (especially ChatGPT) has been mostly a headache. It increases administrative time spent dealing with all the bad output it generates that needs to be fixed. Promises of “personalized learning for all” are meh at best (on a good day). The ever-present uncanny valley (that no one can really seem to fix) makes application to real world scenarios pointless.

Many are saying that it is time to rethink the role of humans in education. The role of humans has always been to learn, but there never really was one defined way to do that role. In fact, the practice of learning has always been evolving and changing since the dawn of time. Only people stuck in the “classrooms haven’t changed in 100 years” myth would think we need to rethink the role of humans – and I know the people saying this don’t believe in that myth. I wish we had more bold leaders that would take the opposite stance against AI, so that we can avoid an educational future that “could also lead to a further devaluation of the intrinsic value of studying and learning” as Williamson puts it.

Speaking of leadership, there are many that say that universities have a “lack of strong institutional leadership.” That is kind of a weird thing to say, as very few people make it to the top of institutions without a strong ability to lead. They often don’t lead in the way people want, but that doesn’t mean they aren’t strong. In talking with some of these leaders, they just don’t see a good case that AI has value now or even in the future. So they are strongly leading institutions in a direction where they do see value. I wish it would be towards a future that sees the intrinsic value of studying and learning. But I doubt that will be the case, either.
