So, yeah – I really don’t want to be writing about AI all the time. It just seems to dominate the Education conversation these days, and not always in the ways that the AI Cheerleaders™ want it to. I am actually working on a series of posts that examine a 1992 Instructional Technology textbook to show that ideas like learning analytics (yes, in 1992), critical pedagogy, and automation have been part of the Ed-Tech discussion for longer than many realize. Of course, these topics have not been discussed as well as they should have been, but recent attempts to frame them as “newer concerns” are ignoring history.
I am also going to guess that Course Hero is happy that AI is taking all the heat right now?
But there are still so many problematic ideas floating around out there that I think someone needs to say something (to all five of you reading this). And I’m not talking about AI Cheerleaders™ that go too far, like this:
More proof that ChatGPT is ideally suited to replace the modern research university? https://t.co/miUzGjwgUL
— Marc Andreessen (@pmarca) December 23, 2022
We know this stuff is too far-fetched to be realistic, and even the Quoted Tweet by Kubacka points to why this line of thinking is pointed in the wrong direction.
But I am more concerned about the “good” sounding takes that unfortunately also gloss over some real complexities and issues that will probably keep AI from being useful for the foreseeable future. Unfortunately, we also know that the untenable-ness of a solution usually doesn’t stop businesses and institutions from still going all in. Anyways, speaking of “all in,” let’s start with the first problematic take:
No One is Paying Attention to AI
Have you been on Twitter or watched the news recently? There are A LOT of people involved and engaged with it – mostly being ignored if we don’t toe the approved line set by the “few entrepreneurs and engineers” that control its development. I just don’t see how anyone can take the “everyone is sticking their head in the sand” argument seriously anymore. Some people get a bit more focused and claim that Higher Ed specifically is not paying attention, but that is also not true.
So where are the statements and policies from upper level admin on AI? Well, there are some, but really – when has there been a statement or action about something that is discipline-specific? Oh, sorry – did you not know that there are several fields that don’t have written essay assignments? Whether they stick with standardized quizzes for everything, or are more focused on hands-on skills – or whatever it might be – please keep in mind that essays and papers aren’t as ubiquitous as they used to be (and even back in their heyday, probably still not a majority).
This just shows the myopic nature of so many futurist-lite takes: it affects my field, so we must get a statement from the President of the University on new policies and tools!
The AI Revolution is Causing Rare Societal Transformation
There are many versions of this, from the (mistaken) idea that for thousands of years “technologies that our ancestors used in their childhood were still central to their lives in their old age” to the (also mistaken) idea that we are in a rare societal upheaval on the level of the agricultural and industrial revolutions (which are by far not the only two). Just because we look down on the agricultural or stone age innovations that happened every generation as “not cool enough,” it doesn’t mean they weren’t happening. Just find a History major and maybe talk with them about any historical points, please?
Pretty much every generation for thousands of years has had new tools or ideas that changed over their lifetime, whether it was a new plowing tool or a better housing design or whatnot. And there have been more than two revolutions that have caused major transformation throughout human history. Again – find a person with a History degree or two and listen to them.
AI is Getting Better All the Time
GPT-3 was supposed to be the “better” that would begin to take over for humans, but when people started pointing out all of the problems with it (and everything it failed at), the same old AI company line came out: “it will only get better.”
It will improve – but will it actually get better? We will see.
I know that the AI hype is something that many people just started paying attention to over the past 5-10 years, but many of us have been following it for decades. And we have noticed that, while the outputs get more refined and polished, the underlying ability to think and perform cognitive tasks has not advanced beyond “nothing” since the beginning.
And usually you will hear people point out that a majority of AI experts believe there is a real chance that human-level artificial intelligence will be developed within the next few decades. Well… it is complicated. Actual studies usually ask trickier questions, like whether “there is a 50% chance of developing human-level AI” at some point in the future – and over half of the experts are saying there is a 50% chance of it happening in a time frame beyond most of our lifetimes (2060). Of course, your bias will affect how you read that statistic, but to me, having 50% of experts say there is a 50% chance of true human-level artificial intelligence in the far future… is not a real chance at all.
Part of the problem is the belief that more data will cross the divide into Artificial General Intelligence, but it is looking less and less likely that this is so. Remember – GPT-3 was supposed to be the miracle of what happens when we can get more data and computing speed into the mix… and it proved to fall short in many ways. And people are taking notice:
I covered some of this in my last post on AI as well. So is AI getting better all the time? Probably not in the ways you think it is. But the answer is complicated – and what matters more is asking who it gets better for and what purposes it gets better at. Grammar rules? Sure. Thinking more and more like humans? Probably not.
We Should Embrace AI, Not Fear It
It’s always interesting how so many of these conversations just deal with two extremes (“fully embrace” vs “stick head in the sand”), as if there are no options between the two. To me, I have yet to see a good case for why we have to embrace it (and laid out some concerns on that front in my last blog post). “Idea generation” is not really a good reason, since there have been thousands of websites for that for a long time.
But what if the fears are not unfounded? Autumm Caines does an excellent job of exploring some important ethical concerns with AI tools. AI is a data grab at the moment, and it is extracting free labor from every person that uses it. And beyond that, have you thought about what it can be used for as it improves? I mean the abusive side: deep fakes, fake revenge porn, you name it. The more students you send to it, the better it gets at all aspects of faking the world. And people that use it for nefarious purposes will give it more believable scripts to follow, as they will want to exercise more control; they just need the polished output to be better – something that is happening.
“But I don’t help those companies!!!” you might protest, not realizing where “those companies” get their AI updates from.
Educators Won’t Be Able to Detect AI Generated Assignments
I always find it interesting that some AI proponents speak about AI as if student usage will start to happen in the future – even though those of us that are teaching have been seeing AI-generated submissions for months. As I covered in a previous post, I have spoken to dozens and dozens of teachers in person and online that are noting how some students are turning in work with flawless grammar and spelling, but completely missing any cognition or adherence to assignment guidelines. AI submissions are here, and assignments with any level of complexity are easily exposing them.
However, due to studies like this one, proponents of AI are convinced teachers will have a hard time recognizing AI-generated work. But hold on a second – this study is “assessing non-experts’ ability to distinguish between human- and machine-authored text (GPT2 and GPT3) in three domains (stories, news articles, and recipes).” First of all, teachers usually are experts, and second of all, few assignments are as basic as stories, news, or recipes. Even games that test your ability to detect AI-generated content are frustratingly far away from most course assignment designs (and actually, the content is not that hard to detect if you try the test yourself).
People That Don’t Like AI Are Resistant Due to Science Fiction
Some have a problem believing in AI because it is used in Sci-Fi films and TV shows? Ugh. Okay, now find a sociologist. It is more likely that AI Cheerleaders™ believe that AI will change the world because of what they saw in movies, not the other way around.
I see this idea trotted out every once in a while, and… just don’t listen to people that want to use this line of thinking.
AI is Advancing Because It Beats Humans at [insert game name here]
Critics have (correctly) been pointing out that AI being able to beat humans at Chess or other complex strategy games is not really that impressive at all. Closed systems with defined rules (no matter how complex) are not that hard to program in general – it just took time and increases in computing power. Even older languages like PHP could have been used to write programs that beat humans, given the same time and computing power.
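To make the point concrete, here is a minimal sketch (my own illustration, not from any AI lab) of the classic minimax search applied to tic-tac-toe. Nothing here involves learning or cognition – the program simply exhausts the rule-defined game tree, which is exactly why “it beats humans at a game” says so little:

```python
# Minimax for tic-tac-toe: a closed system with fixed rules, solvable
# by brute-force search alone. Board is a list of 9 cells: "X", "O", or " ".

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Score the position with perfect play: +1 if X wins, -1 if O wins, 0 draw."""
    w = winner(board)
    if w == "X":
        return 1
    if w == "O":
        return -1
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0  # board full, no winner: draw
    scores = []
    for m in moves:
        board[m] = player                                  # try the move
        scores.append(minimax(board, "O" if player == "X" else "X"))
        board[m] = " "                                     # undo it
    return max(scores) if player == "X" else min(scores)

# Perfect play from an empty board is always a draw:
print(minimax([" "] * 9, "X"))  # 0
```

A few dozen lines of exhaustive search "masters" the game completely; the same brute-force approach (plus decades of hardware gains and pruning tricks) is the backbone of game-playing AI, which is why beating humans at games is a poor proxy for general intelligence.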
Where Do We Go From Here?
Well, obviously, no one is listening to people like me that have to deal with AI-generated junk submissions in the here and now. You will never see any of us on an AI panel anywhere. I realize there is a big push now (which is really just a continuation of the same old “cheaters” push) to create software that can detect AI-generated content. Which will then be circumvented by students, a move that will in turn lead to more cheating-detection surveillance. Another direction many will go is to join the movement away from written answers and towards standardized tests. While there are many ways to create assignments that will still not be replicable by AI any time in the near future, probably too few will listen to the “better pedagogy” calls as usual. The AI hype will fade as it usually does (as Audrey Watters has taught us with her research), but the damage from that hype will still remain.
Matt is currently an Instructional Designer II at Orbis Education and a Part-Time Instructor at the University of Texas Rio Grande Valley. Previously he worked as a Learning Innovation Researcher with the UT Arlington LINK Research Lab. His work focuses on learning theory, Heutagogy, and learner agency. Matt holds a Ph.D. in Learning Technologies from the University of North Texas, a Master of Education in Educational Technology from UT Brownsville, and a Bachelor of Science in Education from Baylor University. His research interests include instructional design, learning pathways, sociocultural theory, heutagogy, virtual reality, and open networked learning. He has a background in instructional design and teaching at both the secondary and university levels and has been an active blogger and conference presenter. He also enjoys networking and collaborative efforts involving faculty, students, administration, and anyone involved in the education process.