All AI, All the Time

So, yeah – I really don’t want to be writing about AI all the time. It just seems to dominate the Education conversation these days, and not always in the ways that the AI Cheerleaders want it to. I am actually working on a series of posts that examine a 1992 Instructional Technology textbook to show that ideas like learning analytics (yes, in 1992), critical pedagogy, and automation have been part of the Ed-Tech discussion for longer than many realize. Of course, these topics have not been discussed as well as they should have been, but recent attempts to frame them as “newer concerns” ignore that history.

I am also going to guess that Course Hero is happy that AI is taking all the heat right now?

But there are still so many problematic ideas floating around out there that I think someone needs to say something (to all five of you reading this). And I’m not talking about AI Cheerleaders that go too far, like this:

We know this stuff is too far-fetched to be realistic, and even the quoted tweet by Kubacka points out why this line of thinking is headed in the wrong direction.

But I am more concerned about the “good”-sounding takes that unfortunately also gloss over some real complexities and issues that will probably keep AI from being useful for the foreseeable future. Of course, we also know that the untenability of a solution usually doesn’t stop businesses and institutions from going all in anyway. Anyways, speaking of “all in,” let’s start with the first problematic take:

No One is Paying Attention to AI

Have you been on Twitter or watched the news recently? There are A LOT of people involved and engaged with it – most of whom get ignored by the “few entrepreneurs and engineers” that control its development if they don’t toe the approved line. I just don’t see how anyone can take the “everyone is sticking their head in the sand” argument seriously anymore. Some people narrow the claim to say that Higher Ed specifically is not paying attention, but that is also not true.

So where are the statements and policies from upper-level admin on AI? Well, there are some, but really – when has there been a statement or action about something that is discipline-specific? Oh, sorry – did you not know that there are several fields that don’t have written essay assignments? Whether they stick with standardized quizzes for everything, or are more focused on hands-on skills – or whatever it might be – please keep in mind that essays and papers aren’t as ubiquitous as they used to be (and even back in their heyday, they probably weren’t used in a majority of courses).

This just shows the myopic nature of so many futurist-lite takes: it affects my field, so we must get a statement from the President of the University on new policies and tools!

The AI Revolution is Causing Rare Societal Transformation

There are many versions of this, from the (mistaken) idea that for thousands of years “technologies that our ancestors used in their childhood were still central to their lives in their old age” to the (also mistaken) idea that we are in a rare societal upheaval on the level of the agricultural and industrial revolutions (which are far from the only two, by the way). Just because we look down on agricultural or stone age innovations that happened every generation as “not cool enough,” it doesn’t mean they weren’t happening. Just find a History major and talk with them before making any Historical points, please?

Pretty much every generation for thousands of years has had new tools or ideas that changed over their lifetime, whether it was a new plowing tool or a better housing design or whatnot. And there have been more than two revolutions that have caused major transformation throughout human history. Again – find a person with a History degree or two and listen to them.

AI is Getting Better All the Time

GPT-3 was supposed to be the “better” that would begin to take over for humans, but when people started pointing out all of the problems with it (and everything it failed at), the same old AI company line came out: “it will only get better.”

It will improve – but will it actually get better? We will see.

I know that the AI hype is something that many people just started paying attention to over the past 5-10 years, but many of us have been following it for decades. And we have noticed that, while the outputs get more refined and polished, the underlying ability to think and perform cognitive tasks has not advanced beyond “nothing” since the beginning.

And usually you will hear people point out that a majority of AI experts believe there is a real chance that human-level artificial intelligence will be developed within the next few decades. Well… it is complicated. Actual studies usually ask trickier questions, like whether “there is a 50% chance of developing human-level AI” at some point in the future – and over half of the experts are saying there is a 50% chance of it happening in a time frame beyond most of our lifetimes (2060). Of course, your bias will affect how you read that statistic, but to me, having 50% of experts say there is a 50% chance of true human-level artificial intelligence in the far future… is not a real chance at all.

Part of the problem is the belief that more data will cross the divide into Artificial General Intelligence, but it is looking less and less likely that this is so. Remember – GPT-3 was supposed to be the miracle of what happens when we can get more data and computing speed into the mix… and it proved to fall short in many ways. And people are taking notice:

https://twitter.com/plevy/status/1607859669288075272

I covered some of this in my last post on AI as well. So is AI getting better all the time? Probably not in the ways you think it is. But the answer is complicated – and what matters more is asking who it gets better for and what purposes it gets better at. Grammar rules? Sure. Thinking more and more like humans? Probably not.

We Should Embrace AI, Not Fear It

It’s always interesting how so many of these conversations just deal with two extremes (“fully embrace” vs “stick head in the sand”), as if there are no options between the two. Personally, I have yet to see a good case for why we have to embrace it (and I laid out some concerns on that front in my last blog post). “Idea generation” is not really a good reason, since there have been thousands of websites for that for a long time.

But what if the fears are not unfounded? Autumm Caines does an excellent job of exploring some important ethical concerns with AI tools. AI is a data grab at the moment, and it is extracting free labor from every person that uses it. And beyond that, have you thought about what it can be used for as it improves? I mean the abusive side: deepfakes, fake revenge porn, you name it. The more students you send to it, the better it gets at all aspects of faking the world (and people that use it for nefarious purposes will supply their own, more believable scripts to follow, since they want more control – they just need the polished output to get better, something that is already happening).

“But I don’t help those companies!!!” you might protest, not realizing where “those companies” get their AI updates from.

Educators Won’t Be Able to Detect AI Generated Assignments

I always find it interesting that some AI proponents speak about AI as if student usage will only start to happen in the future – even though those of us that are teaching have been seeing AI-generated submissions for months. As I covered in a previous post, I have spoken to dozens and dozens of teachers in person and online that are noting how some students are turning in work with flawless grammar and spelling, but completely missing any cognition or adherence to assignment guidelines. AI submissions are here, and assignments with any level of complexity are easily exposing them.

However, due to studies like this one, proponents of AI are convinced teachers will have a hard time recognizing AI-generated work. But hold on a second – this study is “assessing non-experts’ ability to distinguish between human- and machine-authored text (GPT2 and GPT3) in three domains (stories, news articles, and recipes).” First of all, teachers usually are experts, and second of all, few assignments are as basic as stories, news, or recipes. Even games that test your ability to detect AI-generated content are frustratingly far away from most course assignment designs (and actually, the content is still not that hard to detect if you try the test yourself).

People That Don’t Like AI Are Resistant Due to Science Fiction

So some people supposedly have a problem believing in AI because of how it is portrayed in Sci-Fi films and TV shows? Ugh. Okay, now find a sociologist. It is more likely that AI Cheerleaders believe that AI will change the world because of what they saw in movies, not the other way around.

I see this idea trotted out every once in a while, and… just don’t listen to people that want to use this line of thinking.

AI is Advancing Because It Beats Humans at [insert game name here]

Critics have (correctly) been pointing out that AI being able to beat humans at Chess or other complex strategy games is not really that impressive at all. Closed systems with defined rules (no matter how complex) are not that hard to program in general – it just took time and increases in computing power. Even something like PHP, or earlier programming languages, could have been used to beat humans, given the same time and computing power.
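To illustrate the point with a toy example (a minimal sketch of my own, not anything pulled from the critics above): plain exhaustive search plays a perfect game of tic-tac-toe with zero “intelligence” involved, and the same basic approach, given enough computing power and engineering, is what handles the much bigger closed, rule-defined games.

```python
# A minimal, hypothetical sketch (mine, not from any study cited in this post):
# exhaustive minimax search plays perfect tic-tac-toe with no "intelligence"
# involved at all. Scale up the compute and the same basic idea covers much
# bigger closed, rule-defined games.

def winner(board):
    """Return 'X', 'O', or None for a 9-character board string."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) from the perspective of `player`, who moves next:
    +1 = forced win, 0 = draw, -1 = forced loss."""
    w = winner(board)
    if w:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # board full: draw
    opponent = 'O' if player == 'X' else 'X'
    best_score, best_move = -2, None
    for m in moves:
        next_board = board[:m] + player + board[m + 1:]
        score, _ = minimax(next_board, opponent)
        score = -score  # the opponent's gain is our loss
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

if __name__ == "__main__":
    # Brute-force search from an empty board "proves" tic-tac-toe is a draw.
    score, move = minimax(' ' * 9, 'X')
    print(f"best opening move: {move}, value with perfect play: {score}")
```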

Where Do We Go From Here?

Well, obviously, no one is listening to people like me that have to deal with AI-generated junk submissions in the here and now. You will never see any of us on an AI panel anywhere. I realize there is a big push now (which is really just a continuation of the same old “cheaters” push) to create software that can detect AI-generated content. Which will then be circumvented by students, a move that will in turn lead to more cheating detection surveillance. Another direction many will go is to join the movement away from written answers and more towards standardized tests. While there are many ways to create assignments that will still not be replicable by AI any time in the near future, probably too few will listen to the “better pedagogy” calls as usual. The AI hype will fade as it usually does (as Audrey Watters has taught us with her research), but the damage from that hype will still remain.

AI is a Grift

Okay, maybe I went too extreme with the title, but saying “whoever decided to keep calling it ‘Artificial Intelligence’ after it was clear that it wasn’t ‘intelligence’ was a grifter” was too long and clunky for a post title. Or even for a sentence really.

Or maybe I should say that what we currently call “Artificial Intelligence” is not intelligence, probably never will be, and should more accurately be called “Digital Automation.”

That would be more accurate. And to be honest, I would have no problem with people carrying on with (some of) the work they are currently doing in AI if it was more accurately labeled. Have you ever noticed that so many of the people hyping the glorious future of AI are people whose jobs depend on AI work receiving continued funding?

Anyways, I see so many people saying “AI is here, we might as well use it. It’s not really different from using a calculator, spell check, or Grammarly!” that I wanted to respond to these problematic statements.

First of all, the argument that since the “tech is here, we might as well use it” is a weird one, especially coming from so many critical educators. We have the technology to quickly and easily kill people as well (not just with guns, but drones and other advances). But we don’t just say “hey, guess we have to shoot people because the technology is here!” We usually evaluate the ethics of a technology to see what should be done with it, and I keep scratching my head as to why so many people give AI a pass on the core critical questions.

Take, for instance, the “The College Essay is Dead” article published in The Atlantic. So many people breathlessly share that article as proof that AI is here. The only problem is – the paragraph on “Learning Styles” at the beginning of the article (the one the author would give a B+) is completely incorrect about learning styles. It gets everything wrong. If you were really grading it, it should get a zero. Hardly anyone talks about how learning styles are “shaped.” Learning Styles are most often seen as something people magically have, without much thought given to where they came from. When is the last time you saw an article on learning styles that mentioned anything about “the interactions among learning styles and environmental and personal factors,” or anyone at all referring to “the kinds of learning we experience”? What? Kinds of learning we experience?

The AI in this case had no clue what learning styles were, so it mixed what it could find from “learning” and the world of (fashion) “style” and then made sure the grammar and structure followed all the rules it was programmed with. Hardly anyone is noticing that the cognition is still non-existent because they are being wowed by flawless grammar.

Those of us that are getting AI-generated submissions in our classes (and there is a growing number of teachers noticing this and talking about it) are seeing it firsthand. We get submissions with perfect grammar and spelling – that completely fail the assignment. Grades are plummeting because students are believing the hype and turning to AI to relieve hectic lives/schedules/etc – and finding out the hard way that the thought leaders are lying to them.

(For me, I have made it a habit of interacting with my students and turning assignments into conversations, so I have always asked students to re-work assignments that completely miss the instructions. It’s just that now, more and more, I am asking the students with no grammar or spelling mistakes to start over. Every time, unfortunately. And I am talking to more and more teachers all over the place that say the exact same thing.)

But anyways – back to the concerning arguments about AI. Some say it is a tool just like Grammarly or spellcheck. Well, on the technical side it definitely isn’t, but I believe they are referring to the usage side… which is also incorrect. You see, Grammarly and spellcheck replaced functions that no one ever really cared were outsourced – you could always ask a friend or roommate or whoever “how do you spell this word?” or “hey, can you proofread this paragraph and tell me if the grammar makes sense” and no one cared. That was considered part of the writing process. Asking the same person to write the essay for you? That has always been considered unethical in many cases.

And that is the weird part to me – as much as I have railed against the way we look at cheating here, the main idea behind cheating being wrong was that people wanted students to do the work themselves. I would still agree with that part – it’s not really “learning” if you just pay someone else to do it. Yet no one has really advanced a “ten ways cool profs will use AI in education” list that accounts for the fact that learners will be outsourcing part of the learning process to machines – and decades of research shows that skipping those parts is not good for actually learning things.

This is part of the problem with AI that the hype skips over – using AI in education shortcuts the process of learning about thinking, writing, art creation, etc. in order to get to the end product quicker. Even when you use AI as part of the process to generate ideas or create rough drafts, it still removes important parts of the learning process – steps we know from research are important.

But again, educational thought leaders rarely listen to instructional designers or learning scientists. Just gotta get the retweets and grants.

At some point, the AI hype is going to smash head first into the wall of the teachers that hate cheating, where swapping out a contract cheating service for an AI bot is not going to make a difference to their beliefs. It was never about who or what provided the answers, but the fact that it wasn’t the student themselves that did it. Yes, that statement says a lot about Higher Ed. But also, it means your “AI is here, might as well accept it” hot take will find few allies in that arena.

But back (again) to the AI hype. The thought that AI adoption is like calculator adoption is a bit more complicated, but still not very accurate. First of all, that calculator debate is still not settled for some. And second of all, it was never “free-for-all calculator usage!” People realized that once you become an adult, few people care who does the math as long as it gets done correctly. You can just ask anyone working with you something like “hey – what was the total budget for the lunch session?” and as long as you got the correct number, no one cares if it came from you or a calculator or asking your co-worker to add it up. But people still need to learn how math works so they can know they are getting the correct answer (and using a calculator still requires you to know basic math in order to input the numbers correctly, unlike AI that claims to do that thinking for you). So students start off without calculators to learn the basics, and then calculators are gradually worked in as learners move on to more complex topics.

However, if you are asked to write a newsletter for your company – would people be okay with AI writing it? The answer is probably more complicated, but in many cases they might not care as long as it is accurate (they might want a human editor to check it). But if you are an architect that has to submit a written proposal about how your ideas would be the best solution to a client’s problem, I don’t think that client would want an AI doing the proposal writing for you. There are many times in life where your writing is about proving you know something, and no one would accept an AI-generated document as that proof.

Yes, of course there is a lot that can be said about instructors that create bad assignments, and about how education has relied too much on bad essays to prove learning when they don’t actually prove it – but sorry, AI hypesters: you don’t get credit for just now figuring that one out when many, many of us in the ID, LS, and other related fields have been trying to raise the alarm on that for decades. You are right, but we see how (most of) you are just now taking that one seriously… just because it strengthens your hype.

Many people are also missing what tools like ChatGPT are actually designed to do. I see so many lists of “how to use ChatGPT in education” that talk about using it as a tool to help generate ideas and answers to questions. However, it is actually a tool that aims to create final answers, essays, paintings, etc. We have had tools to generate ideas and look for answers for decades now – ChatGPT just spits out what those tools have always produced when asked (but not always that accurately, as many have noted). The goal of many of the people creating these AI tools is to replace humans for certain tasks – not just give ideas. AI has been failing at that for decades, and is not really getting any closer no matter how much you want to believe the “it will only get better” line that AI companies are selling.

(It has to get good first before it can start getting better, FYI.)

This leads back to my first assertion – that there is no such thing as “artificial intelligence.” You see, what we call AI doesn’t have “intelligence” – not the kind that can be “trained” or “get better” – and it is becoming more apparent to many that it never will. Take this article about how AI’s True Goal May No Longer Be Intelligence, for instance. It looks at how most AI research has given up on developing true Artificial General Intelligence, and wants you to forget that it was never achieved. What that means is that, through the years, things like machine learning and data analytics were re-branded as “intelligence” because “AI” is easier to sell to some people than those terms. But it was never intelligent, at least not in the way that AI companies like to imply.

Sorry – we never achieved true AI, and what we call AI today is mislabeled.

“As a result, the industrialization of AI is shifting the focus from intelligence to achievement,” Ray writes in the article above. This is important to note according to Yann LeCun, chief AI scientist at Facebook owner Meta, who “expressed concern that the dominant work of deep learning today, if it simply pursues its present course, will not achieve what he refers to as ‘true’ intelligence, which includes things such as an ability for a computer system to plan a course of action using common sense.” Does that last part sound familiar? Those promoting a coming “Age of AI” are hinting that AI can now plan things for us, like class lessons and college papers.

(And yes, I can’t help but notice that the “Coming Age of AI” hype sounds just like the “Coming Age of Second Life” and “Coming Age of Google Wave” and so on hype. Remember how those were going to disrupt Higher Ed? Remember all the hand-wringing 5-10 years ago over how institutional leaders were not calling emergency meetings about the impact of learning analytics? I do.)

But why does “true” intelligence matter? Well, because AI companies are still selling us the idea that it will have “true” intelligence – because it will “only get better.” Back to the ZDNet article: “LeCun expresses an engineer’s concern that without true intelligence, such programs will ultimately prove brittle, meaning, they could break before they ever do what we want them to do.”

And it only gets worse: “Industrial AI professionals don’t want to ask hard questions, they merely want things to run smoothly…. Even scientists who realize the shortcomings of AI are tempted to put that aside to relish the practical utility of the technology.”

In other words, they are getting wowed by the perfect grammar while failing to notice that there is no cognition there, like there hasn’t been for decades.

Decades? What I mean by that is, if you look at the history section of the Artificial General Intelligence article on Wikipedia, you will see that the hype over AI has been coming and going for decades. Even that article points out that we still don’t have true AI: “In 2020, OpenAI developed GPT-3, a language model capable of performing many diverse tasks without specific training. According to Gary Grossman in a VentureBeat article, while there is consensus that GPT-3 is not an example of AGI, it is considered by some to be too advanced to classify as a narrow AI system.” Terms like narrow AI, weak AI, industrial AI, and others were just marketing terms to expand the umbrella of AI to include things that did not have “intelligence,” but could be programmed to mimic human activities. The term “AI,” for now, is just a marketing gimmick. It started out in the 1950s and still has not made any real advances in cognition. It will only get better at fooling people into thinking it is “intelligent.”

A lot is being said now about how AI is going to make writing, painting, etc. “obsolete.” The vinyl record would like to have a word with you about that, but even if you were to believe the hype… there is more to consider. Whenever people proclaim that something is “obsolete,” always ask “for whom?” In the case of emerging technologies, the answer is almost always “for the rich elite.” Technological advances often only remain free or low cost as long as their makers need guinea pigs to complete them.

If you are pushing students to use a new technology now because it is free, then you are just hastening the day that they will no longer be able to afford to use it. Not to mention the day it will be weaponized against them. If you don’t believe me, look at the history of Ed-Tech. How many tools started off as free or low cost, got added to curriculum all over the place, and then had fees added or increased? How many Ed-Tech tools have proven to be harmful to the most vulnerable? Do you think AI is going to actually plot a different course than the tech before it?

No, you see – AI is often being created by certain people who have always been able to pay other people to do their work. It was just messy to deal with underpaid labor, so AI is looked at behind the scenes as a way to ditch the messy human part of exploitation. All of the ways you are recommending that ChatGPT be used in courses are just giving them free labor to perfect the tool (at least to their liking). Once they are done with that, do you really think it will stay free or low cost? Give me a break if you are gullible enough to believe it will.

Mark my words: at some point, if the “Age of AI” hype takes off, millions of teachers are suddenly going to have to shift lesson plans once their favorite AI tool is no longer free or is just shut down. Or tons of students that don’t have empathetic teachers might have to sell another organ to afford class AI requirements.

(You might notice how all of the “thoughtful integration of AI in the classroom” articles are by people that would benefit greatly, directly or indirectly, from hundreds or thousands more students using ChatGPT.)

On top of all of that – even if the hype does prove true, at this point AI is still untested and unproven. Why are so many people so quick to push an untested solution on their students? I even saw someone promoting how students could use ChatGPT for counseling services. WTF?!?!? It is just abusive to send students who need counseling to unproven technology, no matter how short-staffed your Mental Health Services are.

But wait – there is more! People have also been raising concerns (rightly so) about how AI can be used for intentionally harmful purposes, everything from faking revenge porn to stock market manipulation. I know most AI evangelists express concern about this as well, but do they realize that every usage of AI for experimentation or course work will increase the ability of people to intentionally use AI for harm? I have played around with AI as much as anyone else, but I stopped because I realized that if you are using it, you are adding to the problem by helping to refine the algorithms. What does it mean, then, to unleash classes full of students onto it as free labor to improve it?

To be clear here, I have always been a fan of AI generated art, music, and literature as a sub-genre itself. The weird and trippy things that AI has produced in the past are a fascinating look at what happens when you only have a fraction of the trillions of constructs and social understandings we have in our minds. But to treat it as a technology that is fully ready to unleash on our students without much critical thought about how dangerous that could be?

Also, have you noticed that almost every AI ethics panel out there is made entirely of pro-AI people and no skeptics? I am a fan of the work of some of these “pro-AI people,” but where is the true critical balance?

I get that we can’t just ignore AI and do nothing, but there is still a lot of ground between “sticking our heads in the sand” and “full embrace.” For now, like many instructors, I tell students just not to use it if they want to pass. I’m not sure I see that changing any time soon. Most of the good ideas on all of those lists and articles of “how to integrate AI into the classroom” are just basic teaching designs that most teachers could think of in their sleep. But what damage is going to be done by institutional leaders and company owners rushing in to implement unproven technology? That may “disrupt” things in ways that we don’t want to see come to fruition.