Okay, maybe I went too extreme with the title, but saying “whoever decided to keep calling it ‘Artificial Intelligence’ after it was clear that it wasn’t ‘intelligence’ was a grifter” was too long and clunky for a post title. Or even for a sentence really.

Or maybe I should say that what we currently call “Artificial Intelligence” is not intelligence, probably never will be, and should more accurately be called “Digital Automation.”

That would be more accurate. And to be honest, I would have no problem with people carrying on with (some of) the work they are currently doing in AI if it were more accurately labeled. Have you ever noticed that so many of the people hyping the glorious future of AI are people whose jobs depend on AI work receiving continued funding?

Anyways, I see so many people saying “AI is here, we might as well use it. It’s not really different than using a calculator, spell check, or Grammarly!” that I wanted to respond to these problematic statements.

First of all, the claim that since the “tech is here, we might as well use it” is a weird one, especially coming from so many critical educators. We have the technology to quickly and easily kill people as well (not just with guns, but with drones and other advances). But we don’t just say “hey, guess we have to shoot people because the technology is here!” We usually evaluate the ethics of a technology to see what should be done with it, and I keep scratching my head as to why so many people give AI a pass on the core critical questions.

Take, for instance, the “The College Essay Is Dead” article published in The Atlantic. So many people breathlessly share that article as proof that AI is here. The only problem is – the paragraph on “Learning Styles” at the beginning of the article (the one that the author would give a B+) is completely incorrect about learning styles. It gets everything wrong. If you were really grading it, it should get a zero. Hardly anyone talks about how learning styles are “shaped.” Learning styles are most often seen as something people magically have, without much thought given to where they came from. When is the last time you saw an article on learning styles that mentioned anything about “the interactions among learning styles and environmental and personal factors,” or anyone at all referring to “the kinds of learning we experience”? What? Kinds of learning we experience?

The AI in this case had no clue what learning styles were, so it mixed what it could find from “learning” and the world of (fashion) “style,” and then made sure the grammar and structure followed the patterns it was trained on. Hardly anyone is noticing that the cognition is still non-existent because they are being wowed by flawless grammar.
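
To make that concrete, here is a minimal sketch of my own (the sample text, variable names, and output are all made up for illustration, and this toy bigram model is nothing like ChatGPT’s actual architecture) showing how purely statistical text generation can sound fluent while understanding nothing:

```python
import random
from collections import defaultdict

# A toy bigram generator: it picks each next word based only on which
# words followed the previous word in its "training" text. This is a
# deliberately crude illustration (real GPT-class models use enormous
# neural networks, not word counts), but the core point carries over:
# output is driven by statistical patterns in text, not by any grasp
# of the subject matter.

# Made-up sample text blending "learning styles" and fashion "style."
training_text = (
    "learning styles are shaped by experience "
    "styles are popular in fashion and design "
    "learning is shaped by environmental factors"
).split()

# Record which words follow each word in the sample text.
followers = defaultdict(list)
for prev, nxt in zip(training_text, training_text[1:]):
    followers[prev].append(nxt)

# Generate by repeatedly sampling a statistically plausible next word.
word = "learning"
output = [word]
for _ in range(12):
    options = followers.get(word)
    if not options:
        break
    word = random.choice(options)
    output.append(word)

print(" ".join(output))
# Possible output: "learning styles are popular in fashion and design..."
# The surface looks grammatical, but there is zero understanding of what
# learning styles are: exactly the "learning" plus "fashion style" mashup
# described above.
```

Scale that basic idea up by billions of parameters and you get far better grammar and coherence, but it is still pattern-matching over text, not cognition.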

Those of us who are getting AI-generated submissions in our classes (and there is a growing number of teachers noticing this and talking about it) see it firsthand. We get submissions with perfect grammar and spelling – that completely fail the assignment. Grades are plummeting because students are believing the hype and turning to AI to relieve hectic lives/schedules/etc – and finding out the hard way that the thought leaders are lying to them.

(For me, I have made it a habit of interacting with my students and turning assignments into conversations, so I have always asked students to re-work assignments that completely miss the instructions. It’s just that now, more and more, I am asking the students with no grammar or spelling mistakes to start over. Every time, unfortunately. And I am talking to more and more teachers all over the place who say the exact same thing.)

But anyways – back to the concerning arguments about AI. Some say it is a tool just like Grammarly or spellcheck. Well, on the technical side it definitely isn’t, but I believe they are referring to the usage side… which is also incorrect. You see, Grammarly and spellcheck replaced functions that no one ever really cared about being outsourced – you could always ask a friend or roommate or whoever “how do you spell this word?” or “hey, can you proofread this paragraph and tell me if the grammar makes sense,” and no one cared. That was considered part of the writing process. Asking the same person to write the essay for you? That has always been considered unethical in many cases.

And that is the weird part to me – as much as I have railed against the way we look at cheating here, the main idea behind cheating being wrong was that people wanted students to do the work themselves. I would still agree with that part – it’s not really “learning” if you just pay someone else to do it. Yet no one has really advanced a “ten ways cool profs will use AI in education” list that accounts for the fact that learners will be outsourcing part of the learning process to machines – and decades of research shows that skipping those parts is not good for actually learning things.

This is part of the problem with AI that the hype skips over – using AI in education shortcuts the process of learning about thinking, writing, art creation, etc. to get to the end product quicker. Even when you use AI as part of the process to generate ideas or create rough drafts, it still removes important parts of the learning process – steps we know from research are important.

But again, educational thought leaders rarely listen to instructional designers or learning scientists. Just gotta get the retweets and grants.

At some point, the AI hype is going to smash headfirst into the wall of teachers who hate cheating, where swapping out a contract cheating service for an AI bot is not going to make a difference to their beliefs. It was never about who or what provided the answers, but the fact that it wasn’t the student themselves doing the work. Yes, that statement says a lot about Higher Ed. But it also means your “AI is here, might as well accept it” hot take will find few allies in that arena.

But back (again) to the AI hype. The idea that AI adoption is like calculator adoption is a bit more complicated, but still not very accurate. First of all, that calculator debate is still not settled for some. And second of all, it was never “free-for-all calculator usage!” People realized that once you become an adult, few people care who does the math as long as it gets done correctly. You can just ask anyone working with you something like “hey – what was the total budget for the lunch session?” and as long as you got the correct number, no one cares if it came from you or a calculator or asking your co-worker to add it up. But people still need to learn how math works so they can know they are getting the correct answer (and using a calculator still requires you to know basic math in order to input the numbers correctly, unlike AI that claims to do that thinking for you). So students start off without calculators to learn the basics, and then calculators are gradually worked in as learners move on to more complex topics.

However, if you are asked to write a newsletter for your company – would people be okay with an AI writing it? The answer is probably more complicated, but in many cases they might not care as long as it is accurate (they might want a human editor to check it). But if you are an architect who has to submit a written proposal about how your ideas would be the best solution to a client’s problem, I don’t think that client would want an AI doing the proposal writing for the architect. There are many times in life where your writing is about proving you know something, and no one would accept AI-generated text as that proof.

Yes, of course there is a lot that can be said about instructors who create bad assignments, and about how education has relied too much on essays that don’t actually prove learning. But sorry, AI-hypesters: you don’t get credit for just now figuring that one out when many, many of us in the ID, LS, and other related fields have been trying to raise the alarm on that for decades. You are right, but we see how (most of) you are just now taking it seriously… just because it strengthens your hype.

Many people are also missing what tools like ChatGPT are designed to do. I see so many lists of “how to use ChatGPT in education” that talk about using it as a tool to help generate ideas and answers to questions. However, it is actually a tool that aims to create final answers, essays, paintings, etc. We have had tools to generate ideas and look for answers for decades now – ChatGPT just spits out what those tools have always done when asked (but not always that accurately, as many have noted). The goal of many of the people creating these AI tools is to replace humans for certain tasks – not just give ideas. AI has been failing at that for decades, and is not really getting anywhere closer no matter how much you buy the “it will only get better” hype that AI companies want you to believe.

(It has to get good first before it can start getting better, FYI.)

This leads back to my first assertion – that there is no such thing as “artificial intelligence.” You see, what we call AI doesn’t have “intelligence” – not the kind that can be “trained” or “get better” – and it is becoming more apparent to many that it never will. Take this article about how AI’s True Goal May No Longer Be Intelligence, for instance. It looks at how most AI research has given up on developing true Artificial General Intelligence, and wants you to forget that they never achieved it. What that means is that through the years, things like machine learning and data analytics were re-branded as “intelligence” because AI is easier to sell to some people than those terms. But it was never intelligent, at least not in the ways that AI companies like to describe it.

Sorry – we never achieved true AI, and what we call AI today is mislabeled.

“As a result, the industrialization of AI is shifting the focus from intelligence to achievement,” Ray writes in the article above. This is important to note according to Yann LeCun, chief AI scientist at Facebook owner Meta Platforms, who “expressed concern that the dominant work of deep learning today, if it simply pursues its present course, will not achieve what he refers to as ‘true’ intelligence, which includes things such as an ability for a computer system to plan a course of action using common sense.” Does that last part sound familiar? Those promoting a coming “Age of AI” are hinting that AI can now plan things for us, like class lessons and college papers.

(And yes, I can’t help but notice that the “Coming Age of AI” hype sounds just like the “Coming Age of Second Life” and “Coming Age of Google Wave” and so on hype. Remember how those were going to disrupt Higher Ed? Remember all the hand-wringing 5-10 years ago over how institutional leaders were not calling emergency meetings about the impact of learning analytics? I do.)

But why does “true” intelligence matter? Well, because AI companies are still selling us the idea that it will have “true” intelligence – because it will “only get better.” Back to the ZDNet article: “LeCun expresses an engineer’s concern that without true intelligence, such programs will ultimately prove brittle, meaning, they could break before they ever do what we want them to do.”

And it only gets worse: “Industrial AI professionals don’t want to ask hard questions, they merely want things to run smoothly…. Even scientists who realize the shortcomings of AI are tempted to put that aside to relish the practical utility of the technology.”

In other words, they are getting wowed by the perfect grammar while failing to notice that there is no cognition there, just like there hasn’t been for decades.

Decades? What I mean is, if you look at the history section of the Artificial General Intelligence article on Wikipedia, you will see that the hype over AI has been coming and going for decades. Even that article points out that we still don’t have AI: “In 2020, OpenAI developed GPT-3, a language model capable of performing many diverse tasks without specific training. According to Gary Grossman in a VentureBeat article, while there is consensus that GPT-3 is not an example of AGI, it is considered by some to be too advanced to classify as a narrow AI system.” Labels like narrow AI, weak AI, and industrial AI were just marketing terms meant to expand the umbrella of AI to include things that did not have “intelligence,” but could be programmed to mimic human activities. The term “AI,” for now, is just a marketing gimmick. It started out in the 1950s and still has not produced any real advances in cognition. It will only get better at fooling people into thinking it is “intelligent.”

A lot is being said now about how AI is going to make writing, painting, etc. “obsolete.” The vinyl record would like to have a word with you about that, but even if you were to believe the hype… there is more to consider. Whenever people proclaim that something is “obsolete,” always ask “for whom?” In the case of emerging technologies, the answer is almost always “for the rich elite.” Technological advances often only remain free or low cost as long as their creators need guinea pigs to finish the product.

If you are pushing students to use a new technology now because it is free, then you are just hastening the day when they will no longer be able to afford to use it. Not to mention the day it will be weaponized against them. If you don’t believe me, look at the history of Ed-Tech. How many tools started off as free or low cost, got added to curricula all over the place, and then had fees added or increased? How many Ed-Tech tools have proven to be harmful to the most vulnerable? Do you think AI is actually going to plot a different course than the tech before it?

No, you see – AI is often being created by people who have always been able to pay other people to do their work. It was just messy to deal with underpaid labor, so behind the scenes AI is looked at as a way to ditch the messy human part of exploitation. All of the ways you are recommending that ChatGPT be used in courses are just giving them free labor to perfect the tool (at least to their liking). Once they are done with that, do you really think it will stay free or low cost? Give me a break if you are gullible enough to believe it will.

Mark my words: at some point, if the “Age of AI” hype takes off, millions of teachers are suddenly going to have to shift lesson plans once their favorite AI tool is no longer free or is just shut down. Or tons of students who don’t have empathetic teachers might have to sell another organ to afford class AI requirements.

(You might notice how all of the “thoughtful integration of AI in the classroom” articles are by people who would benefit greatly, directly or indirectly, from hundreds or thousands more students using ChatGPT.)

On top of all of that – even if the hype does prove true, at this point AI is still untested and unproven. Why are so many people so quick to push an untested solution on their students? I even saw someone promoting how students could use ChatGPT for counseling services. WTF?!?!? It is just abusive to send students who need counseling to unproven technology, no matter how short-staffed your Mental Health Services are.

But wait – there is more! People have also been raising concerns (rightly so) about how AI can be used for intentionally harmful purposes – everything from faking revenge porn to manipulating the stock market. I know most AI evangelists express concern about this as well, but do they realize that every use of AI for experimentation or course work increases the ability of people to intentionally use AI for harm? I have played around with AI as much as anyone else, but I stopped because I realized that if you are using it, you are adding to the problem by helping to refine the algorithms. What does it mean to unleash classes full of students on it as free labor to improve it?

To be clear here, I have always been a fan of AI-generated art, music, and literature as a sub-genre in itself. The weird and trippy things that AI has produced in the past are a fascinating look at what happens when a system has only a fraction of the trillions of constructs and social understandings we carry in our minds. But to treat it as a technology that is fully ready to unleash on our students, without much critical thought about how dangerous that could be?

Also, have you noticed that almost every AI ethics panel out there is made entirely of pro-AI people and no skeptics? I am a fan of the work of some of these “pro-AI people,” but where is the true critical balance?

I get that we can’t just ignore AI and do nothing, but there is still a lot of ground between “sticking our heads in the sand” and “full embrace.” For now, like many instructors, I tell students just not to use it if they want to pass. I’m not sure I see that changing any time soon. Most of the good ideas on all of those lists and articles of “how to integrate AI into the classroom” are just basic teaching designs that most teachers think of in their sleep. But what damage is going to be done by institutional leaders and company owners rushing in to implement unproven technology? That may “disrupt” things in ways that we don’t want to see come to fruition.
