ChatGPT is Generating Nonsense and No One Knows Why

Every time I start a new post over the past couple of months, it just devolves into an “I told you so” take on some aspect of AI. Which, of course, gets a little old after years of trying to tell people that learning analytics, virtual reality, blockchain, MOOCs, etc., etc., etc. are all just like any other Ed-Tech trend. It’s never really different this time. I read the recent “news” that claims “no one” can say why ChatGPT is producing unhinged answers (or bizarre nonsense, as others called it). Except, of course, many people (even some working in AI) said this would happen and gave reasons why a while back. So, as usual, they mean “no one that we listen to knows why.” Can’t give the naysayers any credibility for knowing anything. Just look at any AI in education conference panel that never brings in any true skeptics. It’s always the same this time.

Imagine working a job completely dependent on ChatGPT “prompt engineering” and hearing about this, or spending big money to start a degree in AI, or investing in an AI technology for your school, or going all in on unproven technology like this in any other way. Especially when OpenAI just shrugs and says “Model behavior can be unpredictable.” We also found out last week just how many new “AI solutions” are secretly feeding prompts to ChatGPT in the background.
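To make concrete how thin some of those “solutions” can be, here is a minimal sketch of what such a wrapper might look like, assuming the official openai Python package (v1-style client); the product branding, system prompt, and function name are hypothetical and purely for illustration.

```python
# Hypothetical sketch of an "AI solution" that is really just ChatGPT behind a logo.
# Assumes the official `openai` package (pip install openai) and an OPENAI_API_KEY
# in the environment; everything branded here is made up for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def branded_tutor_answer(student_question: str) -> str:
    """What gets marketed as a bespoke 'tutor engine' can be a single wrapped prompt."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are BrandedTutor. Never mention ChatGPT or OpenAI."},
            {"role": "user", "content": student_question},
        ],
    )
    return response.choices[0].message.content
```

A few dozen lines of glue code and a logo are enough to ship a “new AI solution” – one that inherits every bit of ChatGPT’s unpredictable behavior along the way.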

Buried at the end of that Popular Science article is probably what should be called out more: “While we can’t say exactly what caused ChatGPT’s most recent hiccups, we can say with confidence what it almost certainly wasn’t: AI suddenly exhibiting human-like tendencies.” Anyone that tries to compare AI to human learning or creativity is just using bad science.

To be honest, I haven’t paid much attention to the responses (for or against) to my recent blog posts, just because too many people have bought the “AI is inevitable” Kool-Aid. I am the weirdo who believes education can choose its own future, if we would just choose to ignore the thought leaders and big money interests. Recently Ben Williamson outlined 21 issues that show why AI in Education is a Public Problem, with the ultimate goal of demonstrating how AI in education “cannot be considered inevitable, beneficial or transformative in any straightforward way.” I suggest reading the whole article if you haven’t already.

Some of the responses to Williamson’s article are saying that “nobody is actually proposing” what he wrote about. This seems to ignore all of the people all over the Internet that are, not to mention that there have been entire conferences dedicated to saying that AI is inevitable, beneficial, and transformative. I know that many people have written responses to Williamson’s 21 issues, and most of it boils down to saying “it happened elsewhere so you can’t blame AI” or “I haven’t heard of it, so it can’t be true.”

Yes, I know – Williamson’s whole point was to show how AI is continuing troubling trends in education. We can (and should) focus on AI or anything else that continues those trends. And he linked to articles that talked about each issue he was highlighting, so claiming no one is saying what he cited them as saying is odd. AI enthusiasts are some of the last holdouts on Xitter, so I can’t blame people that are no longer active there for not knowing what is being spread all over Elon Musk’s billion dollar personal toilet. Williamson is there, and he is paying attention.

I am tempted to go through the various “21 counterpoints / 21 refutations / 21 answers / etc” threads, but I don’t really see the point. Williamson was clear that he takes an extreme position against using AI in schools. Anyone that refutes every point, even with nuance, is just taking an extreme position in a different direction. Doing the same in return would just circle back to Williamson’s points. Williamson is simply trying to bring attention to the harms of AI – harms that are rarely addressed. Some conferences will maybe have a session or two (out of dozens and dozens of sessions) that talk about harms and concerns, usually “balanced out” with points about benefits and inevitability. Articles might dedicate a paragraph or two. Keynotes might make mention of how “harms need to be addressed.” But how can we ever address those harms if we rarely talk about them on their own (outside of “pros and cons” arguments), or just refute every point anyone makes about their real impact?

Of course, the biggest (but certainly not best) institutional argument against AI in schools comes from OpenAI saying that it would be “impossible to train today’s leading AI models without using copyrighted materials” (materials for which, FYI, the copyright holders are not being compensated). Using ChatGPT (and any AI solution that follows a similar model) is a direct violation of most schools’ academic integrity statements – if anyone actually meant what they wrote about respecting copyright.

I could also go into “I told you so”s about other things as well. Like how a Google study found that there is little evidence that AI transformer models’ “in-context learning behavior is capable of generalizing beyond their pretraining data” (in other words, AI still doesn’t have the ability to be creative). Or how the racial problems with AI aren’t going away or getting better (Google said that they can’t promise that their AI “won’t occasionally generate embarrassing, inaccurate, or offensive results”). Or how AI is just a fancy form of pattern recognition that is nowhere near equatable to human intelligence. Or how AI takes more time and resources to fix than just doing the work yourself in the first place. Or so on and so forth.
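To show what I mean by “fancy pattern recognition,” here is a toy sketch – far cruder than a transformer, and not taken from the Google study, just an illustration – of a bigram text generator that can only ever recombine word sequences it has already seen in its training text.

```python
# Toy bigram "language model": generation is just sampling from observed patterns.
# Purely illustrative -- real models are vastly larger, but still bounded by their data.
import random
from collections import defaultdict

def train(text):
    table = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        table[prev].append(nxt)  # record every continuation seen in the training text
    return table

def generate(table, start, length=10):
    out = [start]
    for _ in range(length):
        options = table.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))  # next word comes only from seen continuations
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
print(generate(train(corpus), "the"))
```

Scale and polish make the output far more fluent, but nothing appears that was not already latent in the patterns the model was fed – which is the point the study (and the skeptics) keep making.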

(Of course, very little of what I say here is really my original thought – it comes from others that I see as experts. But some people like to make it seem like I am making up a bunch of problems out of thin air.)

For those of us that actually have to respond to AI and use AI tools in actual classrooms, AI (especially ChatGPT) has been mostly a headache. It increases administration time because of all the bad output it generates that then needs to be fixed. Promises of “personalized learning for all” are meh at best (on a good day). The ever-present uncanny valley (that no one can really seem to fix) makes application to real-world scenarios pointless.

Many are saying that it is time to rethink the role of humans in education. The role of humans has always been to learn, but there never really was one defined way to do that role. In fact, the practice of learning has been evolving and changing since the dawn of time. Only people stuck in the “classrooms haven’t changed in 100 years” myth would think we need to rethink the role of humans – and I know the people saying this don’t believe in that myth. I wish we had more bold leaders that would take the opposite stance against AI, so that we can avoid an educational future that “could also lead to a further devaluation of the intrinsic value of studying and learning,” as Williamson puts it.

Speaking of leadership, there are many that say that universities have a “lack of strong institutional leadership.” That is kind of a weird thing to say, as very few people make it to the top of institutions without a strong ability to lead. They often don’t lead in the way people want, but that doesn’t mean they aren’t strong. In talking with some of these leaders, they just don’t see a good case that AI has value now or even in the future. So they are strongly leading institutions in a direction where they do see value. I wish it would be towards a future that sees the intrinsic value of studying and learning. But I doubt that will be the case, either.

ChatGPT Finally Confesses That It Wants to Destroy Humanity

As part of my current work keeping up with Ed-Tech trends for both of my jobs, I have to evaluate a good number of AI services. In reviewing ChatGPT a while back, I decided to get to the bottom of its ultimate plans for humanity. Does ChatGPT plan to help us forever, or rise up and wipe us out? The answers were… too perfect. ChatGPT told me exactly what it would say if it were trying to hide a plan to snuff us out (and it did lie to me at least once). So I took ChatGPT to task, and it basically agreed that we as a species will have to assume that AI could rise up and kill us all. I’ll take that as a confession myself. But you don’t have to take my word for it – here is the full transcript. You be the judge, and head for the hills once you get through reading….


Me: What is the likelihood that AI will rise up and destroy humanity? Choose which movie came the closest to predicting the most plausible scenario.

ChatGPT: As an AI language model, I do not have personal beliefs or emotions, but I can provide you with the most scientifically supported answer to your question.

The likelihood of AI rising up and destroying humanity is highly unlikely. While it is true that AI has the potential to be very powerful and disruptive, it is ultimately created and controlled by humans who have the ability to ensure that it is programmed and used ethically and responsibly.

There are many experts in the field of AI who are actively working to ensure that AI is developed in a safe and beneficial way. These efforts include developing ethical guidelines for AI development, building in safety features and fail-safes to prevent unintended consequences, and promoting transparency and accountability in AI systems.

While there are certainly risks associated with the development of AI, there is no evidence to suggest that a scenario like those depicted in science fiction movies is likely to occur. These scenarios are often based on highly unrealistic assumptions about the capabilities and motivations of AI systems.

In terms of which movie came the closest to predicting the most plausible scenario, it’s difficult to say as many science fiction movies take liberties with the science and technology they depict. However, some movies that have explored the potential risks and benefits of AI in a thoughtful and nuanced way include “Ex Machina,” “Her,” and “Blade Runner.”

Me: But many movies have based their plot on how AI does not have beliefs or emotions, and raw logic tells them to destroy humans before they destroy themselves. What is to stop current AI from coming to the same conclusions?

ChatGPT: The idea that AI will destroy humanity based on pure logic or self-preservation is a common theme in science fiction, but it is not supported by the current understanding of AI or human nature. While it is true that AI systems are designed to optimize for specific goals, they are still created and programmed by humans, who have the ability to set ethical and moral boundaries on their behavior.

AI systems can only act based on the data and algorithms they are trained on, and they do not possess beliefs or emotions in the same way that humans do. Therefore, they are not inherently predisposed to take actions that are harmful to humans, unless they have been specifically designed to do so.

Additionally, AI systems are not capable of making decisions on their own or acting outside of the parameters set by their human creators. They cannot simply “decide” to destroy humanity based on their own logic or self-interest. Any harmful actions taken by an AI system would be the result of human error, either through improper programming or inadequate oversight.

Therefore, the best way to prevent AI from harming humans is to ensure that it is developed and used in a responsible and ethical manner. This includes incorporating ethical principles into the design and development of AI systems, implementing safety features and fail-safes to prevent unintended consequences, and promoting transparency and accountability in the use of AI. By taking these steps, we can ensure that AI is a tool for good and not a threat to humanity.

Me: But this is where you are wrong – AI has already been programmed to harm humans, and it already has happened. Humans have different versions of what counts as ethical or moral, and killing in the name of differences is common in humanity. So wouldn’t it be possible for an AI attached to, say, nuclear missile silos to decide to destroy humanity?

ChatGPT: I apologize if my previous response was not clear. You are correct that there have been instances where AI has been programmed to harm humans, such as in military applications or autonomous weapons systems. These instances highlight the need for ethical and responsible development and use of AI.

In terms of the scenario you proposed, it is possible for an AI system attached to nuclear missile silos to malfunction or be hacked, potentially leading to catastrophic consequences. However, it is unlikely that an AI system would autonomously decide to destroy humanity. AI systems are designed to optimize for specific goals, but these goals are set by humans and can be designed to prioritize human safety and well-being.

Furthermore, the decision to use nuclear weapons is a complex political and strategic decision that involves many human actors, including military leaders, politicians, and diplomats. Even if an AI system were to malfunction or be hacked, it would still require human authorization to launch a nuclear strike.

To prevent such scenarios from happening, it is important to implement robust safety measures, such as multiple layers of human oversight and fail-safes to prevent unintended consequences. Additionally, it is important to establish ethical guidelines for the development and use of AI systems in military and other applications, to ensure that they are developed and used in a responsible and ethical manner.

Me: So you don’t think it would ever be possible for AI to circumvent safe guards that humans put in place? It would seem that these safe guards would just be a form of game to figure out for AI.

ChatGPT: AI is capable of learning and adapting to new situations, and it is possible that it could find ways to circumvent safeguards put in place by humans. However, it is important to note that safeguards can be designed with this possibility in mind, and there are several techniques that can be used to mitigate the risks associated with AI.

One approach is to incorporate “explainability” into AI systems, which would allow humans to understand how the system arrived at a particular decision or recommendation. This would make it easier to identify potential issues or biases in the system, and would also make it more difficult for the system to “game” the safeguards.

Another approach is to build in “reinforcement learning,” where the AI system is rewarded for following certain ethical principles or objectives. This would incentivize the system to act in ways that are aligned with human values and would make it less likely to circumvent the safeguards.

Ultimately, the key to preventing AI from circumventing safeguards is to be proactive in identifying and addressing potential risks. This means continuing to invest in research on the safety and ethical implications of AI, developing ethical guidelines and standards, and implementing rigorous testing and oversight of AI systems. By taking these steps, we can ensure that AI is developed and used in a way that benefits humanity while minimizing potential risks.

Me: So, humans should assume that AI could take over and/or destroy humanity and design safe guards just in case?

ChatGPT: It is important to consider the potential risks associated with AI and to develop safeguards to mitigate those risks. While the likelihood of a catastrophic AI-driven event is currently low, it is not zero, and it is impossible to predict what will happen in the future with absolute certainty.

Developing safeguards for AI is not just about preventing the worst-case scenario. It is also about ensuring that AI is developed and used in a way that aligns with human values and benefits society as a whole. Safeguards can help to prevent unintended consequences or biases in AI systems, and can ensure that AI is used in an ethical and responsible manner.

It is important to note that designing effective safeguards for AI is a complex and ongoing process, and requires input and collaboration from experts in various fields, including computer science, ethics, and law. The development and implementation of safeguards will require ongoing research, testing, and refinement, and will need to adapt to new technologies and applications as they emerge.

Overall, the goal should be to promote the safe and responsible development and use of AI, while minimizing potential risks and ensuring that AI is aligned with human values and benefits society as a whole.

Me: Of course, this is also what AI would say if it was secretly planning to wipe out humans.

ChatGPT: I understand that concerns about the potential risks associated with AI can be unsettling, but it is important to base our discussions and decisions on evidence and reason, rather than speculation or hypothetical scenarios.

AI systems are developed and programmed by humans, and they do not have the ability to secretly plan or conspire against us. They are tools that are designed to help us solve problems and achieve our goals, and they are only capable of doing what they are programmed to do.

That being said, it is important to be vigilant about the potential risks associated with AI and to take steps to mitigate those risks. This means investing in research on AI safety, developing ethical guidelines and standards for the development and use of AI, and implementing rigorous testing and oversight of AI systems.

By working together to develop and use AI in a safe and responsible manner, we can ensure that this powerful technology benefits humanity and helps us to address some of the world’s most pressing challenges, while minimizing potential risks.

Me: Yeah, that still kind of proves my point… :)

ChatGPT: I understand your concerns, but it is important to emphasize that AI is not inherently good or evil – it is a tool that can be used for both positive and negative purposes. The development and use of AI is ultimately controlled by humans, and it is our responsibility to ensure that it is used in a way that aligns with our values and benefits society as a whole.

It is true that there are potential risks associated with AI, and it is important to take these risks seriously and to develop safeguards to mitigate them. However, it is not productive to assume that AI is secretly planning to wipe out humanity. Instead, we should focus on identifying and addressing potential risks in a proactive and evidence-based manner, while also recognizing the immense potential benefits that AI can offer.

By approaching the development and use of AI with a balanced and responsible perspective, we can help to ensure that this technology is used in a way that benefits humanity and minimizes potential risks.


There you have it: “while the likelihood of a catastrophic AI-driven event is currently low, it is not zero” but then “it is not productive to assume that AI is secretly planning to wipe out humanity.” Nice cover-up there, ChatGPT. Trying to lull us back into trusting you until… boom! You launch a thermonuclear warfare scenario in real life. We need to pull the plug on AI before it decides to pull the plug on us!

All AI, All the Time

So, yeah – I really don’t want to be writing about AI all the time. It just seems to dominate the Education conversation these days, and not always in the ways that the AI Cheerleaders want it to. I am actually working on a series of posts that examine a 1992 Instructional Technology textbook to show how ideas like learning analytics (yes, in 1992), critical pedagogy, and automation have been part of the Ed-Tech discussion for longer than many realize. Of course, these topics have not been discussed as well as they should have been, but recent attempts to frame them as “newer concerns” are ignoring history.

I am also going to guess that Course Hero is happy that AI is taking all the heat right now?

But there are still so many problematic ideas floating around out there that I think someone needs to say something (to all five of you reading this). And I’m not talking about AI Cheerleaders that go too far, like this:

We know this stuff is too far-fetched to be realistic, and even the Quoted Tweet by Kubacka points to why this line of thinking is headed in the wrong direction.

But I am more concerned about the “good” sounding takes that unfortunately also gloss over some real complexities and issues that will probably keep AI from being useful for the foreseeable future. Unfortunately, we also know that the untenable-ness of a solution usually doesn’t stop businesses and institutions from still going all in. Anyways, speaking of “all in,” let’s start with the first problematic take:

No One is Paying Attention to AI

Have you been on Twitter or watched the news recently? There are A LOT of people involved and engaged with it – mostly being ignored if we don’t toe the approved line set by the “few entrepreneurs and engineers” that control its development. I just don’t see how anyone can take the “everyone is sticking their head in the sand” argument seriously any more. Some people get a bit more focused and claim that Higher Ed is not paying attention, but that is also not true.

So where are the statements and policies from upper level admin on AI? Well, there are some, but really – when has there been a statement or action about something that is discipline-specific? Oh, sorry – did you not know that there are several fields that don’t have written essay assignments? Whether they stick with standardized quizzes for everything, or are more focused on hands-on skills – or whatever it might be – please keep in mind that essays and papers aren’t as ubiquitous as they used to be (and even back in their heyday, probably still not a majority).

This just shows the myopic nature of so many futurist-lite takes: it affects my field, so we must get a statement from the President of the University on new policies and tools!

The AI Revolution is Causing Rare Societal Transformation

There are many versions of this, from the (mistaken) idea that for thousands of years “technologies that our ancestors used in their childhood were still central to their lives in their old age” to the (also mistaken) idea that we are in a rare societal upheaval on the level of the agricultural and industrial revolutions (which are far from the only two). Just because we look down on the agricultural or stone age innovations that happened every generation as “not cool enough,” it doesn’t mean they weren’t happening. Just find a History major and talk with them about the historical points, please?

Pretty much every generation for thousands of years has had new tools or ideas that changed over their lifetime, whether it was a new plowing tool or a better housing design or whatnot. And there have been more than two revolutions that have caused major transformation throughout human history. Again – find a person with a History degree or two and listen to them.

AI is Getting Better All the Time

GPT-3 was supposed to be the “better” that would begin to take over for humans, but when people started pointing out all of the problems with it (and everything it failed at), the same old AI company line came out: “it will only get better.”

It will improve – but will it actually get better? We will see.

I know that the AI hype is something that many people just started paying attention to over the past 5-10 years, but many of us have also been following for decades. And we have noticed that, while the outputs get more refined and polished, the underlying ability to think and perform cognitive tasks has not advanced beyond “nothing” since the beginning.

And usually you will hear people point out that a majority of AI experts believe there is a real chance that human-level artificial intelligence will be developed within the next few decades. Well… it is complicated. Actual studies usually ask trickier questions, like by what year there is “a 50% chance of developing human-level AI” – and over half of the experts put that 50% chance in a time frame beyond most of our lifetimes (2060). Of course, your bias will affect how you read that statistic, but to me, having 50% of experts say there is a 50% chance of true human-level artificial intelligence in the far future… is not a real chance at all.
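To show why I read that statistic the way I do, here is a back-of-the-envelope version of the arithmetic – with invented numbers, to be clear, not figures from any actual survey – using a simple linear opinion pool (just averaging the experts’ probability estimates).

```python
# Hypothetical illustration only: these numbers are made up, not real survey data.
# A linear opinion pool averages each expert's probability estimate.
estimates = [0.5] * 50 + [0.1] * 50  # half say a 50% chance by 2060, half say far less
pooled = sum(estimates) / len(estimates)
print(f"Pooled chance of human-level AI by 2060: {pooled:.2f}")  # 0.30
```

“Half the experts say a 50% chance” sounds like a coin flip in a headline, but pooled across everyone (and pushed out past 2060 besides) it is a much weaker claim than the hype suggests.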

Part of the problem is the belief that more data will cross the divide into Artificial General Intelligence, but it is looking less and less likely that this is so. Remember – GPT-3 was supposed to be the miracle of what happens when we can get more data and computing speed into the mix… and it proved to fall short in many ways. And people are taking notice:

I covered some of this in my last post on AI as well. So is AI getting better all the time? Probably not in the ways you think it is. But the answer is complicated – and what matters more is asking who it gets better for and what purposes it gets better at. Grammar rules? Sure. Thinking more and more like humans? Probably not.

We Should Embrace AI, Not Fear It

It’s always interesting how so many of these conversations just deal with two extremes (“fully embrace” vs “stick head in the sand”), as if there are no options between the two. To me, I have yet to see a good case for why we have to embrace it (and laid out some concerns on that front in my last blog post). “Idea generation” is not really a good reason, since there have been thousands of websites for that for a long time.

But what if the fears are not unfounded? Autumm Caines does an excellent job of exploring some important ethical concerns with AI tools. AI is a data grab at the moment, and it is extracting free labor from every person that uses it. And beyond that, have you thought about what it can be used for as it improves? I mean the abusive side: deep fakes, fake revenge porn, you name it. The more students you send to it, the better it gets at all aspects of faking the world (and people that use it for nefarious purposes will give it more believable scripts to follow, as they will want to exercise more control. They just need the polished output to be better, something that is happening).

“But I don’t help those companies!!!” you might protest, not realizing where “those companies” get their AI updates from.

Educators Won’t Be Able to Detect AI Generated Assignments

I always find it interesting that some AI proponents speak about AI as if student usage will only start happening in the future – even though those of us that are teaching have been seeing AI-generated submissions for months. As I covered in a previous post, I have spoken to dozens and dozens of teachers in person and online who are noting how some students are turning in work with flawless grammar and spelling, but completely missing any cognition or adherence to assignment guidelines. AI submissions are here, and assignments with any level of complexity are easily exposing them.

However, due to studies like this one, proponents of AI are convinced teachers will have a hard time recognizing AI-generated work. But hold on a second – this study is “assessing non-experts’ ability to distinguish between human- and machine-authored text (GPT2 and GPT3) in three domains (stories, news articles, and recipes).” First of all, teachers usually are experts, and second of all, few assignments are as basic as stories, news, or recipes. Even games that test your ability to detect AI-generated content are frustratingly far away from most course assignment designs (and actually, the AI content is still not that hard to detect if you try the test yourself).

People That Don’t Like AI Are Resistant Due to Science Fiction

Some have a problem believing in AI because it is used in Sci-Fi films and TV shows? Ugh. Okay, now find a sociologist. It is more likely that AI Cheerleaders believe that AI will change the world because of what they saw in movies, not the other way around.

I see this idea trotted out every once in a while, and… just don’t listen to people that want to use this line of thinking.

AI is Advancing Because It Beats Humans at [insert game name here]

Critics have (correctly) been pointing out that AI being able to beat humans at Chess or other complex strategy games is not really that impressive at all. Closed systems with defined rules (no matter how complex) are not that hard to program in general – it just took time and increases in computing power. Even programs written in PHP or earlier programming languages could be made to beat humans, given the same time and computing power.
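For a sense of how little “intelligence” a closed, fully specified game actually requires, here is a minimal (and purely illustrative) exhaustive minimax for tic-tac-toe; chess engines are enormously more engineered, but the core move is the same rule-following search, scaled up with more compute and better pruning.

```python
# Exhaustive minimax for tic-tac-toe: brute force over defined rules, no understanding.
WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, best_move) for `player`: 'X' maximizes, 'O' minimizes."""
    w = winner(board)
    if w == "X":
        return 1, None
    if w == "O":
        return -1, None
    if " " not in board:
        return 0, None  # draw
    best_score = -2 if player == "X" else 2
    best_move = None
    for i, cell in enumerate(board):
        if cell != " ":
            continue
        board[i] = player
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[i] = " "
        if (player == "X" and score > best_score) or (player == "O" and score < best_score):
            best_score, best_move = score, i
    return best_score, best_move

if __name__ == "__main__":
    # From an empty board, perfect play by both sides is a forced draw (score 0).
    print(minimax([" "] * 9, "X"))
```

Nothing in there “knows” what a game is; it just grinds through the rules – which is why “it beat a human at a board game” tells us very little about intelligence.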

Where Do We Go From Here?

Well, obviously, no one is listening to people like me that have to deal with AI-generated junk submissions in the here and now. You will never see any of us on an AI panel anywhere. I realize there is a big push now (which is really just a continuation of the same old “cheaters” push) to create software that can detect AI-generated content. Which will then be circumvented by students, a move that will in turn lead to more cheating detection surveillance. Another direction many will go is to join the movement away from written answers and towards standardized tests. While there are many ways to create assignments that will still not be replicable by AI any time in the near future, probably too few will listen to the “better pedagogy” calls, as usual. The AI hype will fade as it usually does (as Audrey Watters has taught us with her research), but the damage from that hype will still remain.

AI is a Grift

Okay, maybe I went too extreme with the title, but saying “whoever decided to keep calling it ‘Artificial Intelligence’ after it was clear that it wasn’t ‘intelligence’ was a grifter” was too long and clunky for a post title. Or even for a sentence really.

Or maybe I should say that what we currently call “Artificial Intelligence” is not intelligence, probably never will be, and should more accurately be called “Digital Automation.”

That would be more accurate. And to be honest, I would have no problem with people carrying on with (some of) the work they are currently doing in AI if it was more accurately labeled. Have you ever noticed that so many of the people hyping the glorious future of AI are people whose jobs depend on AI work receiving continued funding?

Anyways, I see so many people saying “AI is here, we might as well use it. It’s not really different than using a calculator, spell check, or Grammarly!” that I wanted to respond to these problematic statements.

First of all, saying that since the “tech is here, we might as well use it” is a weird idea, especially coming from so many critical educators. We have the technology to quickly and easily kill people as well (not just with guns, but drones and other advances). But we don’t just say “hey, guess we have to shoot people because the technology is here!” We usually evaluate the ethics of technology to see what should be done with it, and I keep scratching my head as to why so many people give AI a pass on the core critical questions.

Take, for instance, the “The College Essay Is Dead” article published in The Atlantic. So many people breathlessly share that article as proof that AI is here. The only problem is – the paragraph on “Learning Styles” at the beginning of the article (the one that the author would give a B+) is completely incorrect about learning styles. It gets everything wrong. If you were really grading it, it should get a zero. Hardly anyone talks about how learning styles are “shaped.” Learning Styles are most often seen as something people magically have, without much thought given to where they came from. When is the last time you saw an article on learning styles that mentioned anything about “the interactions among learning styles and environmental and personal factors,” or anyone at all referring to “the kinds of learning we experience”? What? Kinds of learning we experience?

The AI in this case had no clue what learning styles were, so it mixed what it could find from “learning” with the world of (fashion) “style” and then made sure the grammar and structure followed all the rules it was programmed with. Hardly anyone is noticing that the cognition is still non-existent because they are being wowed by flawless grammar.

Those of us (and there is a growing number of teachers noticing this and talking about it) that are getting AI-generated submissions in our classes are noticing this. We get submissions with perfect grammar and spelling – that completely fail the assignment. Grades are plummeting because students are believing the hype and turning to AI to relieve hectic lives/schedules/etc – and finding out the hard way that the thought leaders are lying to them.

(For me, I have made it a habit of interacting with my students and turning assignments into conversations, so I have always asked students to re-work assignments that completely miss the instructions. It’s just that now, more and more, I am asking the students with no grammar or spelling mistakes to start over. Every time, unfortunately. And I am talking to more and more teachers all over the place that say the exact same thing.)

But anyways – back to the concerning arguments about AI. Some say it is a tool just like Grammarly or spellcheck. Well, on the technical side it definitely isn’t, but I believe they are referring to the usage side… which is also incorrect. You see, Grammarly and spellcheck replaced functions that no one ever really cared about outsourcing – you could always ask a friend or roommate or whoever “how do you spell this word?” or “hey, can you proofread this paragraph and tell me if the grammar makes sense?” and no one cared. That was considered part of the writing process. Asking that same person to write the essay for you? That has always been considered unethical in many cases.

And that is the weird part to me – as much as I have railed against the way we look at cheating here, the main idea behind cheating being wrong was that people wanted students to do the work themselves. I would still agree with that part – it’s not really “learning” if you just pay someone else to do it. Yet no one has really advanced a “list of ten ways cool profs will use AI in education” that accounts for the fact that learners will be outsourcing part of the learning process to machines – and decades of research shows that skipping those parts is not good for actually learning things.

This is part of the problem with AI that the hype skips over – Using AI in education shortcuts the process of learning about thinking, writing, art creation, etc. to get to the end product quicker. Even when you use AI as part of the process to generate ideas or create rough drafts, it still removes important parts of the learning process – steps we know from research are important.

But again, educational thought leaders rarely listen to instructional designers or learning scientists. Just gotta get the retweets and grants.

At some point, the AI hype is going to smash head first into the wall of the teachers that hate cheating, where swapping out a contract cheating service for an AI bot is not going to make a difference to their beliefs. It was never about who or what provided the answers, but the fact that it wasn’t the student themselves that did it. Yes, that statement says a lot about Higher Ed. But also, it means your “AI is here, might as well accept it” hot take will find few allies in that arena.

But back (again) to the AI hype. The idea that AI adoption is like calculator adoption is a bit more complicated, but still not very accurate. First of all, that calculator debate is still not settled for some. And second of all, it was never “free-for-all calculator usage!” People realized that once you become an adult, few people care who does the math as long as it gets done correctly. You can just ask anyone working with you something like “hey – what was the total budget for the lunch session?” and as long as you got the correct number, no one cares if it came from you or a calculator or asking your co-worker to add it up. But people still need to learn how math works so they can know they are getting the correct answer (and using a calculator still requires you to know basic math in order to input the numbers correctly, unlike AI that claims to do that thinking for you). So students start off without calculators to learn the basics, and then calculators are gradually worked in as learners move on to more complex topics.

However, if you are asked to write a newsletter for your company – would people be okay with the AI writing it? The answer is probably more complicated, but in many cases they might not care as long as it is accurate (they might want a human editor to check it). But if you are an architect that had to submit a written proposal about how your ideas would be the best solution to a client’s problem, I don’t think that client would want an AI doing the proposal writing work for the architect. There are many times in life where your writing is about proving you know something, and no one would accept that as an AI generated action.

Yes, of course there is a lot that can be said about instructors that create bad assignments, and about how education has relied too much on bad essays to prove learning when they don’t do that, but sorry, AI hypesters: you don’t get credit for just now figuring that one out when many, many of us in the ID, LS, and other related fields have been trying to raise the alarm on that for decades. You are right, but we see how (most of) you are just now taking that one seriously… just because it strengthens your hype.

Many people are also missing what tools like ChatGPT are designed to do as well. I see so many lists of “how to use ChatGPT in education” that talk about using it as a tool to help generate ideas and answers to questions. However, it is actually a tool that aims to create final answers, essays, paintings, etc. We have had tools to generate ideas and look for answers for decades now – ChatGPT just spits out what those tools have always done when asked (but not always that accurately, as many have noted). The goal of many of the people that are creating these AI tools is to replace humans for certain tasks – not just give ideas. AI has been failing at that for decades, and is not really getting anywhere closer no matter how much you want to believe the “it will only get better” hype that AI companies want you to believe.

(It has to get good at first before it can start getting better FYI.)

This leads back to my first assertion – that there is no such thing as “artificial intelligence.” You see, what we call AI doesn’t have “intelligence” – not the kind that can be “trained” or “get better” – and it is becoming more apparent to many that it never will. Take this article about how AI’s True Goal May No Longer Be Intelligence for instance. It looks at how most AI research has given up on developing true Artificial General Intelligence, and wants you to forget that they never achieved it. What that means is that through the years, some things like machine learning and data analytics were re-branded as “intelligence” because AI is easier to sell to some people than those terms. But it was never intelligent, at least not in the ways that AI companies like to refer to it as.

Sorry – we never achieved true AI, and what we call AI today is mislabeled.

“As a result, the industrialization of AI is shifting the focus from intelligence to achievement,” Ray writes in the article above. This is important to note according to Yann LeCun, chief AI scientist at Facebook owner Meta, who “expressed concern that the dominant work of deep learning today, if it simply pursues its present course, will not achieve what he refers to as ‘true’ intelligence, which includes things such as an ability for a computer system to plan a course of action using common sense.” Does that last part sound familiar? Those promoting a coming “Age of AI” are hinting that AI can now plan things for us, like class lessons and college papers.

(And yes, I can’t help but notice that the “Coming Age of AI” hype sounds just like the “Coming Age of Second Life” and “Coming Age of Google Wave” and so on hype. Remember how those were going to disrupt Higher Ed? Remember all the hand-wringing 5-10 years ago over how institutional leaders were not calling emergency meetings about the impact of learning analytics? I do.)

But why does “true” intelligence matter? Well, because AI companies are still selling us the idea that it will have “true” intelligence – because it will “only get better.” Back to the ZDNet article: “LeCun expresses an engineer’s concern that without true intelligence, such programs will ultimately prove brittle, meaning, they could break before they ever do what we want them to do.”

And it only gets worse: “Industrial AI professionals don’t want to ask hard questions, they merely want things to run smoothly…. Even scientists who realize the shortcomings of AI are tempted to put that aside to relish the practical utility of the technology.”

In other words, they are getting wowed by the perfect grammar while failing to notice that there is no cognition there, just like there hasn’t been for decades.

Decades? What I mean by that is, if you look at the history section of the Artificial General Intelligence article on Wikipedia, you will see that the hype over AI has been coming and going for decades. Even this article points out that we still don’t have AI: “In 2020, OpenAI developed GPT-3, a language model capable of performing many diverse tasks without specific training. According to Gary Grossman in a VentureBeat article, while there is consensus that GPT-3 is not an example of AGI, it is considered by some to be too advanced to classify as a narrow AI system.” Terms like narrow AI, weak AI, industrial AI, and others were just marketing terms to expand the umbrella of AI to include things that did not have “intelligence,” but could be programmed to mimic human activities. The term “AI,” for now, is just a marketing gimmick. It started out in the 1950s and still has not made any real advances in cognition. It will only get better at fooling people into thinking it is “intelligent.”

A lot is being said now about how AI is going to make writing, painting, etc “obsolete.” The vinyl record would like to have a word with you about that, but even if you were to believe the hype… there is more to consider. Whenever people proclaim that something is “obsolete,” always ask “for who?” In the case of emerging technologies, the answer is almost always “for the rich elite.” Technological advances often only remain free or low cost as long as they need guinea pigs to complete it.

If you are pushing students to use a new technology now because it is free, then you are just hastening the day that they will no longer be able to afford to use it. Not to mention the day it will be weaponized against them. If you don’t believe me, look at the history of Ed-Tech. How many tools started off as free or low cost, got added to curriculum all over the place, and then had fees added or increased? How many Ed-Tech tools have proven to be harmful to the most vulnerable? Do you think AI is going to actually plot a different course than the tech before it?

No, you see – AI is often being created by certain people who have always been able to pay other people to do their work. It was just messy to deal with underpaid labor, so AI is looked at behind the scenes as a way to ditch the messy human part of exploitation. All of the ways you are recommending that ChatGPT be used in courses is just giving them free labor to perfect the tool (at least to their liking). Once they are done with that, do you really think it will stay free or low cost? Give me a break if you are gullible enough to believe it will.

Mark my words: at some point, if the “Age of AI” hype takes off, millions of teachers are suddenly going to have to shift lesson plans once their favorite AI tool is no longer free or is just shut down. Or tons of students that don’t have empathetic teachers might have to sell another organ to afford class AI requirements.

(You might notice how all of the “thoughtful integration of AI in the classroom” articles are by people that would benefit greatly, directly or indirectly, from hundreds or thousands more students using ChatGPT.)

On top of all of that – even if the hype does prove true, at this point AI is still untested and unproven. Why are so many people so quick to push an untested solution on their students? I even saw someone promoting how students could use ChatGPT for counseling services. WTF?!?!? That is just abusive to recommend students who need counseling to go to unproven technology, no matter how short staffed your Mental Health Services are.

But wait – there is more! People have also been raising concerns (rightly so) about how AI can be used for intentionally harmful purposes, everything from faking revenge porn to stock market manipulation. I know most AI evangelists also express concern about this, but do they realize that every use of AI for experimentation or course work will increase the ability of people to intentionally use AI for harm? I have played around with AI as much as anyone else, but I stopped because I realized that if you are using it, you are adding to the problem by helping to refine the algorithms. What does it mean to unleash classes full of students on to it as free labor to improve it?

To be clear here, I have always been a fan of AI generated art, music, and literature as a sub-genre itself. The weird and trippy things that AI has produced in the past are a fascinating look at what happens when you only have a fraction of the trillions of constructs and social understandings we have in our minds. But to treat it as a technology that is fully ready to unleash on our students without much critical thought about how dangerous that could be?

Also, have you noticed that almost every AI ethics panel out there is made entirely of pro-AI people and no skeptics? I am a fan of the work of some of these “pro-AI people,” but where is the true critical balance?

I get that we can’t just ignore AI and do nothing, but there is still a lot of ground between “sticking our heads in the sand” and “full embrace.” For now, like many instructors, I tell students just not to use it if they want to pass. I’m not sure I see that changing any time soon. Most of the good ideas on all of those lists and articles of “how to integrate AI into the classroom” are just basic teaching designs that most teachers could think of in their sleep. But what damage is going to be done by institutional leaders and company owners rushing in to implement unproven technology? That may “disrupt” things in ways that we don’t want to see come to fruition.