It’s always weird when someone accuses me of “coming up with” some extreme hate take on AI. They immediately let me know they are not reading that many different opinions on AI, because they would easily see who I am listening to more and more as time goes by (it’s not the “AI is inevitable” crowd). Even the title of this post is a direct nod to two of the articles I will reference in this post. If you didn’t know that by reading it, it’s time to expand your reading circles some.

But lest someone think it is me coming up with my own anti-AI takes, read this article by Edward Zitron titled Have We Reached Peak AI? (hint: look at the first half of my post title now). You don’t have to like Zitron, but he does cite his sources and includes a lot of nuance. His point is that a lot of people have been hoodwinked into believing AI is inevitable, because a lot of people at the top need you to believe that this is true so they won’t lose funding:

“Sam Altman desperately needs you to believe that generative AI will be essential, inevitable and intractable, because if you don’t, you’ll suddenly realize that trillions of dollars of market capitalization and revenue are being blown on something remarkably mediocre.”

You could also rephrase this to say “[insert name of thought leader here] desperately needs you to believe that generative AI will be essential, inevitable and intractable in education, because if you don’t, you’ll suddenly realize that millions (probably soon to be billions) of dollars of grant funding are being blown on something remarkably mediocre.” Mike Caulfield said it better on one of those alt-Xitter sites:

“I do think the bubble will pop soon, but my gosh there is so much money (and so many research dollars) chasing this. It’s more real than MOOCs and blockchain, less impactful so far than the invention of spreadsheets. Spreadsheets were quite impactful btw, just limited to some specific areas of application. But also spreadsheets also weren’t a spam engine.”

Anyways, at least read the entire article by Zitron, so that when all of this falls apart, you will have some idea of why.

If you think Zitron is overblowing things, then consider this: Audrey Watters quit Ed-Tech writing a few years ago… but the AI hype has gotten so bad that she felt compelled to come back and comment on it:

“…much of the early history of artificial intelligence too, ever since folks cleverly rebranded it from “cybernetics,” was deeply intertwined with the building of various chatbots and robot tutors. So if you’re out there today trying to convince people that AI in education is something brand new, you’re either a liar or a fool – or maybe both. Oh, but Audrey. This time, it’s different.”

This quote comes from a post titled I Told You So (hint: now look at the second half of my post title). Watters knows her Ed-Tech history and sees through the hype. Do you know how many “AI in Education” and “Utilizing AI in Instructional Design” sessions I have sat through in the past year that swear it is different this time, while never mentioning any of the problems that many have raised? And then show an example of an AI-generated course or assignment that is pure garbage (while breathlessly proclaiming “how amazing” this is)?

Anyways, I wish I could have worked “Time for a Pause” into the post title without sounding clunky, but another article to read is Time for a Pause: Without Effective Public Oversight, AI in Schools Will Do More Harm Than Good by Ben Williamson, Alex Molnar, and Faith Boninger. Some have called it extremist; others have written long threads about everything they find wrong with it. I could sit here refuting those disagreements (like I have with posts in the past), but it would probably be ignored by everyone outside of those who are already suspicious of AI hype. The main concern is encapsulated here:

“Years of warnings and precedents have highlighted the risks posed by the widespread use of pre-AI digital technologies in education, which have obscured decision-making and enabled student data exploitation. Without effective public oversight, the introduction of opaque and unproven AI systems and applications will likely exacerbate these problems.”

I appreciate the use of the term “pre-AI,” because it is true that we don’t have actual Artificial Intelligence, and we are still pretty far away from attaining it (if that is even possible). But as AI continues to fail to live up to the hype, we will probably see constant rebranding (from cybernetics to AI to AGI to Artificial Super Intelligence and so on) in an attempt to obscure the fact that there really isn’t intelligence there (no matter how badly some people want to redefine “intelligence”).

Like I have said before, all of this is sad because there are uses for AI that can be helpful, especially in the field of accessibility. All of that could go down the drain if (or, as some would say, when) we hit peak AI. We’ll see how this all plays out. I just hope the people investing heavily in AI are listening to all sides and making sure they are ready for every possibility. And FYI: this is NOT an April Fools’ post.
