ChatGPT is Generating Nonsense and No One Knows Why

Every time I start a new post over the past couple of months, it just devolves into an “I told you so” take on some aspect of AI. Which, of course, gets a little old after trying to tell people that learning analytics, virtual reality, blockchain, MOOCs, etc, etc, etc are all just like any other Ed-Tech trend. It’s never really different this time. I read the recent “news” that claims “no one” can say why ChatGPT is producing unhinged answers (or bizarre nonsense, as others called it). Except, of course, many people (even some working in AI) said this would happen and gave reasons why a while back. So, as usual, they mean “no one that we listen to knows why.” Can’t give the naysayers any credibility for knowing anything. Just look at any AI in education conference panel that never brings in any true skeptics. It’s always the same this time.

Imagine working a job completely dependent on ChatGPT “prompt engineering” and hearing about this, or spending big money to start a degree in AI, or investing in an AI technology for your school, or going big in any other way with unproven technology like this. Especially when OpenAI just shrugs and says “Model behavior can be unpredictable.” We also found out last week just how many new “AI solutions” are secretly feeding prompts to ChatGPT in the background.

Buried at the end of that Popular Science article is probably what should be called out more: “While we can’t say exactly what caused ChatGPT’s most recent hiccups, we can say with confidence what it almost certainly wasn’t: AI suddenly exhibiting human-like tendencies.” Anyone that tries to compare AI to human learning or creativity is just using bad science.

To be honest, I haven’t paid much attention to the responses (for or against) to my recent blog posts, just because too many people have bought the “AI is inevitable” Kool-Aid. I am the weirdo that believes education can choose its own future if we would just choose to ignore the thought leaders and big money interests. Recently Ben Williamson outlined 21 issues that show why AI in Education is a Public Problem, with the ultimate goal of demonstrating how AI in education “cannot be considered inevitable, beneficial or transformative in any straightforward way.” I suggest reading the whole article if you haven’t already.

Some of the responses to Williamson’s article are saying that “nobody is actually proposing” what he wrote about. This seems to ignore all of the people all over the Internet that are, not to mention that there have been entire conferences dedicated to saying that AI is inevitable, beneficial, and transformative. I know that many people have written responses to Williamson’s 21 issues, and most of them boil down to saying “it happened elsewhere so you can’t blame AI” or “I haven’t heard of it, so it can’t be true.”

Yes, I know – Williamson’s whole point was to show how AI is continuing troubling trends in education. We can (and should) focus on AI or anything else that continues those trends. And he linked to articles that talked about each issue he was highlighting, so claiming no one is saying what he cited them as saying is odd. AI enthusiasts are some of the last holdouts on Xitter, so I can’t blame people that are no longer active there for not knowing what is being spread all over Elon Musk’s billion dollar personal toilet. Williamson is there, and he is paying attention.

I am tempted to go through the various “21 counterpoints / 21 refutations / 21 answers / etc” threads, but I don’t really see the point. Williamson was clear that he takes an extreme position against using AI in schools. Anyone that refutes every point, even with nuance, is just taking an extreme position in the other direction. For me to do the same would just circle back to Williamson’s points. Williamson is trying to bring attention to the harms of AI. These harms are rarely addressed. Some conferences will maybe have a session or two (out of dozens and dozens of sessions) that talk about harms and concerns, usually “balanced out” with points about benefits and inevitability. Articles might dedicate a paragraph or two. Keynotes might mention that “harms need to be addressed.” But how can we ever address those harms if we rarely talk about them on their own (outside of “pros and cons” arguments), or just refute every point anyone makes about their real impact?

Of course, the biggest (but certainly not best) institutional argument against AI in schools comes from OpenAI saying that it would be “impossible to train today’s leading AI models without using copyrighted materials” (materials for which the copyright holders are not being compensated, FYI). Using ChatGPT (and any AI solution that followed a similar model) is a direct violation of most schools’ academic integrity statements – if anyone actually meant what they wrote about respecting copyright.

I could also go into “I told you so”s about other things as well. Like how a Google study found little evidence that AI transformer models’ “in-context learning behavior is capable of generalizing beyond their pretraining data” (in other words, AI still doesn’t have the ability to be creative). Or how the racial problems with AI aren’t going away or getting better (Google said that they can’t promise that their AI “won’t occasionally generate embarrassing, inaccurate, or offensive results”). Or how AI is just a fancy form of pattern recognition that is nowhere near equatable to human intelligence. Or how AI output takes more time and resources to fix than just doing the work yourself in the first place. Or so on and so forth.

(Of course, very little of what I say here is really my original thought – it comes from others that I see as experts. But some people like to make it seem like I am making up a bunch of problems out of thin air.)

For those of us that actually have to respond to AI and use AI tools in actual classrooms, AI (especially ChatGPT) has been mostly a headache. It increases administrative time because all of the bad output it generates has to be fixed. Promises of “personalized learning for all” land at “meh” at best (on a good day). The ever-present uncanny valley (that no one can really seem to fix) makes application to real world scenarios pointless.

Many are saying that it is time to rethink the role of the humans in education. The role of humans has always been to learn, but there never really was one defined way to fill that role. In fact, the practice of learning has always been evolving and changing since the dawn of time. Only people stuck in the “classrooms haven’t changed in 100 years” myth would think we need to rethink the role of humans – and I know the people saying this don’t believe in that myth. I wish we had more bold leaders that would take the opposite stance and push back against AI, so that we can avoid an educational future that “could also lead to a further devaluation of the intrinsic value of studying and learning,” as Williamson puts it.

Speaking of leadership, there are many that say that universities have a “lack of strong institutional leadership.” That is kind of a weird thing to say, as very few people make it to the top of institutions without a strong ability to lead. They often don’t lead in the way people want, but that doesn’t mean they aren’t strong. In talking with some of these leaders, they just don’t see a good case that AI has value now or even in the future. So they are strongly leading institutions in a direction where they do see value. I wish it were towards a future that sees the intrinsic value of studying and learning. But I doubt that will be the case, either.

How Could AI “Just Disappear Someday” Anyways?

For me, one of the more informative ways to track a trend in education once it gets wider societal notice is by following people outside of education. As predicted, people are starting to complain about how low the quality of the output from ChatGPT and other generative AI is. Students that use AI are seeing their grades come in consistently lower than peers that don’t (“AI gives you a solid C- if you just use it, maybe a B+ if you become a prompt master”). Snarky jokes about “remember when AI was a thing for 5 seconds” abound. These things are not absolute evidence, but they usually indicate that a lot of dissatisfaction exists under the surface – where the life or death of a trend is actually decided.

Of course, since this is education, AI will stick around and dominate conversations for a good 5 more years… at least. As much as many people like to complain about education (especially Higher Ed) being so many years behind the curve, those that rely on big grant funding to keep their work going actually depend on that reluctance to change. It’s what keeps the lights on well after a trend has peaked.

As I looked at in a recent post, once a trend has peaked, there are generally three paths that said trend will possibly take as it descends. Technically, it is almost impossible to tell if a trend has peaked until after it clearly starts down one of those paths. Some trends get a little quiet like AI is now, and then just hit a new stride once a new aspect emerges from the lull. But many trends also decline after exactly the kind of lull that AI is in now. And – to me at least, I could be wrong – the signs seem to point to a descent (see the first paragraph above).

While it is very, very unlikely that AI will follow the path of Google Wave and just die a sudden, weird death… I still say it is a very slight possibility. To those that say it is impossible – I want to take a minute to talk about how it could possibly happen, even if it is still unlikely.

One of the problematic aspects of AI that is finally getting some attention (even though many of us have been pointing it out all along) is that most services like ChatGPT were illegally built on intellectual property without consent or compensation. Courts and legal experts seem to be taking a “well, whatever” attitude towards this (with a few obvious exceptions), usually based on a huge misunderstanding of what AI really is. Many techbros push this misunderstanding as well – that AI actually is a form of “intelligence” and therefore should not be held to the copyright standards that other software has to abide by when utilizing intellectual property.

AI is a computer program. It is not a form of intelligence. Techbros arguing that it is are just trying to save their own companies from lawsuits. When a human being reads Lord of the Rings, they may write a book that is very influenced by their reading, but there will still be major differences because human memory is not perfect. There are still limits – imprecise as they are – on when an influence becomes plagiarism. It is the imperfection of memory in most human brains that makes it possible for influences to be considered “legal” under the law, as well as moral in most people’s eyes.

While AI relies on “memory” – it’s not a memory like humans have. It is precise and exact. When AI constructs a Lord of the Rings-style book, it has a perfect, exact representation of every single word of those books to make derivative copies from. Everything created by AI should be considered plagiarism because AI software has a perfect “memory.” There is nothing about AI that qualifies as an imperfect influence. And most people that actually program AI will tell you something similar.

Now that creative types are noticing their work being plagiarized by AI without their permission or compensation, lawsuits are starting. And so far, the courts are often getting it wrong. This is where we have to wade into the murky waters of the difference between what is legal and what is moral. All AI built on intellectual property without permission or compensation is plagiarism. The law and courts will probably land on the side of techbros and make AI plagiarism legal. But that still won’t change the fact that almost all AI is immorally created. Take away from that what you will.

HOWEVER… there is still a slight possibility that the courts could side with content creators. Certainly if Disney gets wind that their stuff was used to “train” AI (“train” is another problematic term). IF courts decide that content owners are due either compensation or a say in how their intellectual property is used (which, again, is the way it should be), then there are a few things that could happen.

Of course, the first possibility is that AI companies could be required to pay content owners. It is possible that some kind of Spotify screwage could happen, and content owners would get a fraction of a penny every time something of theirs is used. I doubt people will be duped by that arrangement once again, though, so the amount of compensation demanded could be significant. Multiply that by millions of content owners, and most AI companies would just fold overnight. Most of their income would go towards content owners. Those relying on grants and investments would see those gone fast – no one would want to invest in a company that owes so much to so many people.

However, not everyone will want money for their stuff. They will want it pulled out. And there is also a very real possibility that techbros will somehow successfully argue about the destructive effect of royalties or whatever. So another option would be to go down the road of mandated permissions to use any content to “train” AI algorithms. While there is a lot of good open source content out there, and many people that will let companies use their content for “training”… most AI companies would be left with a fraction of the data they originally used to train their software. If any. I don’t see many companies being able to survive being forced to re-start from scratch with 1-2% of the data they originally had. Because to actually remove data from generative AI, you can’t just delete it going forward. You have to go back to nothing, strip out the data you can’t use anymore, and re-do the whole “training” from the beginning.
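To make that last point concrete, here is a minimal, purely hypothetical sketch of what “removing” data actually requires. The train_from_scratch function and the record fields are stand-ins I made up for illustration – no real vendor pipeline looks like this – but the shape of the problem is the same: filter the corpus, then start the whole training run over from nothing.

```python
# Hypothetical sketch: there is no "delete this document from the weights"
# operation, so removal means filtering the corpus and retraining from zero.
def remove_and_retrain(corpus, train_from_scratch):
    # Keep only the material the company actually has permission to use
    licensed = [doc for doc in corpus if doc["permission_granted"]]
    # Even if only 1-2% of the data survives, the entire run starts over
    return train_from_scratch(licensed)

# Toy usage with a stand-in "trainer" so the sketch actually runs
corpus = [
    {"text": "openly licensed article", "permission_granted": True},
    {"text": "scraped novel chapter", "permission_granted": False},
]
model = remove_and_retrain(corpus, lambda docs: f"model retrained on {len(docs)} document(s)")
print(model)  # model retrained on 1 document(s)
```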

Again, I must emphasize that I think these scenarios are very unlikely. Courts will probably find a way to keep AI companies moving forward with little to no compensation for content owners. This will be the wrong decision morally, but most will just hang their hat on the fact that it is legal and not think twice about who is being abused. Just like they currently do with the sexism, racism, ableism, homophobia, transphobia, etc that is known to exist within AI responses. I’m just pointing out how AI could feasibly disappear almost overnight, even if it is highly unlikely.

Deny Deny Deny That AI Rap and Metal Will Ever Mix

Due to the slow-motion collapse of Twitter, we have all been slowly losing touch with various people and sources of information. So, of course, I am losing convenient contact with various people as they no longer appear in my Twitter feed. Sure, I hear things – people tell me when others make veiled jokes about my criticism at conference sessions, and I see when my blog posts are just flat out plagiarized out there (not like any of them are that original anyways). I know no one on any “Future of AI” session would have the guts to put me (or any other deeply skeptical critic) on their panels to respond.

There also seems to be a loss of other things as well, like what some people in our field call “getting Downesed.” I missed this response to one of my older blog posts critiquing out-of-control AI hype. And I am a bit confused by it as well.

I’m not sure where the thought that my point was to “deny deny deny” came from, especially since I was just pushing back against some extreme tech-bro hype (some of which claimed that commercial art would be dead in a year – i.e., by November 2023. Guess who was right?). In fact, I actually said:

So, is this a cool development that will become a fun tool for many of us to play around with in the future? Sure. Will people use this in their work? Possibly. Will it disrupt artists across the board? Unlikely. There might be a few places where really generic artwork is the norm and the people that were paid very little to crank them out will be paid very little to input prompts.

So yeah, I did say this would probably have an impact. My main point was that AI cannot transcend its input in a way that humans would call “creative” in a human sense. Downes asks:

But why would anyone suppose AI is limited to being strictly derivative of existing styles? Techniques such as transfer learning allow an AI to combine input from any number of sources, as illustrated, just as a human does.

Well, I don’t suppose it – that is what I have read from the engineers creating it (at least, the ones that don’t have all of their grant funding or seed money connected to making AGI a reality). Also, combining any number of sources like humans do is not the same as transcending those sources. You can still be derivative when combining sources, after all (something I will touch on in a second).

I’m confused by the link to transfer learning, because as the link provided says, transfer learning is a process where “developers can download a pretrained, open-source deep learning model and finetune it for their own purpose.” That finetuning process would produce better results, but only because the developers would choose what to re-train the various layers of the existing model on. In other words, any creativity that comes from a transfer learning process would come from the choices of humans, not the AI itself.
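For anyone who hasn’t seen what that finetuning step looks like in practice, here is a minimal sketch (assuming PyTorch and torchvision are installed – the number of classes and the choice to freeze everything but the final layer are my own illustrative choices, not anything from the link Downes shared). Notice that every consequential decision is made by the developer:

```python
# Minimal transfer-learning sketch: humans pick the pretrained model, decide
# which layers stay frozen, pick the new task, and pick the new training data.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="DEFAULT")      # download a pretrained open model

for param in model.parameters():                # freeze what the model already "knows"
    param.requires_grad = False

num_classes = 5                                 # the developer's own task, chosen by a human
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new final layer to re-train

# Only the layer the developer chose to re-train gets updated during finetuning
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3)
```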

The best way I can think of to explain this is with music. AI’s track record with music creation is very hit-and-miss. In some cases where the music was digitally created and minimalistic in the first place – like electronica, minimalist soundtracks, and ambient music – AI can do a decent job of creating something. But many fans of those genres will often point out that AI-generated music in these genres still seems to lack a human touch. And on the other end of the musical spectrum – black metal – AI fails miserably. Fans of the genre routinely mock all AI attempts to create any form of extreme metal, for that matter.

When you start talking about combining multiple sources (or genres in music), my Gen-X brain goes straight to rap metal.

If you have been around long enough, you probably remember the combination of rap and metal starting with either Run-DMC’s cover/collaboration of Aerosmith’s “Walk This Way,” or maybe Beastie Boys’ “Fight for Your Right to Party” if you missed that. Rap/metal combinations existed before that – “Walk This Way” wasn’t even Run-DMC’s first time rapping over screaming guitars (see “Rock Box,” for example). But this was the point where many people first noticed it.

Others began combining rap with metal – Public Enemy, Sir Mix-a-Lot, MC Lyte, and others were known to use samples of heavy songs, or bring in guitarists to play on tracks. The collaboration went both ways – Vernon Reid of Living Colour played on Public Enemy songs, and Chuck D and Flavor Flav appeared on Living Colour tracks. Ministry, Faith No More, and many other heavy bands had rap vocals or interludes. Most of these were one-off collaborations. Anthrax was one of the first bands to just say “we are doing rap metal ourselves” with “I’m the Man.” Many people thought it was a pure joke, but in interviews it became clear that Anthrax had deep respect for acts like Public Enemy. They just liked to goof off sometimes (something they also did on thrash tracks as well).

(“I’m the man” has a line that says “they say rap and metal can never mix…” – which shows that this music combination was not always a popular choice.)

One unfortunate footnote to this whole era was a band called Big Mouth, who probably released the first true rap/metal album by a rap/metal band on a major label (1988’s Quite Not Right on Atlantic Records). As you can see by their lead single “Big Mouth” it was… really bad. Just cliche and offensive on many levels. FYI – that is future Savatage and Trans-Siberian Orchestra guitarist Chris Caffery on guitar.

Big Mouth is what you would have gotten if our current level of AI existed in 1986 and someone asked it to create a new band that mixed rap and metal before Run-DMC or Beastie Boys had a chance to kick things off. Nothing new or original – and quite honestly a generic mix of the two sources. Sure, a form of creativity, but one you could refer to as combinational creativity.

I don’t say this as a critique of AI per se, or as an attempt to Deny Deny Deny. People that work on the AI itself will tell you that it can combine and predict most likely outcomes in creative ways – but it can’t come up with anything new or truly original in a transformative sense.

On to mid-1991, and what was probably the pinnacle of rap/rock collaborations: Anthrax wanted to cover Public Enemy’s “Bring the Noise” and turned it into a collaboration. The resulting “Bring the Noise 91” is a chaotic blend of rap and thrash that just blew people’s minds back in the day. Sure, guitarist Scott Ian raps a little too much in the second part of the song for some people, but that was also a really unexpected choice.

There is no way that current AI technology could come close to coming up with something like “Bring The Noise 91.” IF you used transfer learning on the AI that theoretically created Big Mouth… you might be able to get there. But that would have to be with Anthrax and Public Enemy both sitting there and training the AI what to do. Essentially, the creativity would all come from the two bands choosing what to transfer. And then you would only have a model that can spit out a million copies of “Bring the Noise 91.” The chaos and weirdness would be explorational creativity… but it all would have mostly come from humans. That is how transfer learning works.

Soon after this, you had bands like Body Count and Rage Against the Machine that made rap metal a legitimate genre. You may not like either band – I tend to like their albums just FYI. So maybe I see some transformational creativity there due to bias. But with a band like Rage Against the Machine, whether you love them or hate them, you can at least hear an element of creativity to their music that elevates them above their influences and combined parts. Sure, you can hear the influences – but you can also hear something else.

(Or if you can’t stand Rage, then you can look at, say, nerdcore bands like Optimus Rhyme that brought creativity to their combined influences. Or many other rap/metal offshoot genres.)

So can AI be creative? That often depends on how you define creativity and who gets to rate it. Something that is new and novel to me might be cliche and derivative to someone else with more knowledge of a genre. There is an aspect of creativity that is in the eye of the beholder, based on what they have beholden before. There are, however, various ways to classify creativity that can be more helpful for understanding true creativity. My blog post in question was looking at the possibility that AI has achieved transformational creativity – which is what most people and companies pay creative types to do.

Sure, rap metal quickly became cliche – that happens to many genres. But when I talk about AI being derivative of existing styles, that is because that is what it is programmed to do. For now, at least, AI can be combinationally creative enough to spit out a Big Mouth on its own. It can come up with an Anthrax/Public Enemy collaboration through explorational creativity if both Anthrax and Public Enemy are putting all of the creativity in. It is still far away from giving us Rage Against the Machine-level transformational creativity that can jump start new genres (if it ever even gets there).

Featured image photo by Yvette de Wit on Unsplash

AI at the Crossroads

If you pay attention to all of the hype and criticism of AI – well, you are superhuman. But if you try to get a good sampling of what is actually out there, you have probably noticed that there are growing concerns about (and even some surprising abandonments of) AI. This is just part of what always happens with Ed-Tech trends – I won’t get into various hype models or claims of “it is different this time.” Trying to prove that a trend is “different this time” is actually just part of the regular trend cycle anyways.

But like all trends before it, AI is quickly approaching a crossroads that all trends face. This crossroads really has nothing to do with which way the usage numbers are going or who can prove what. Like MOOCs, Second Life, Google Wave, and other trends before it – AI is seeing a growing criticism movement that has made many people doubt its future. Where will AI go once it does hit this crossroads? Well, only time will tell – but there are three general possibilities.

The first option is to follow the MOOC path – a slight fall from popularity (even though many numbers never showed a decrease in usage) that results in a soft landing as a background tool used by many – but far from the popular trend that gets all of the grant money. Companies like edX and Coursera saw the dwindling MOOC hype coupled with rising criticism, and they were able to spin off something that is still used by millions every day – but that is essentially not driving much of anything in education anymore. Their companies are there, they are growing – but most people don’t pay much attention to them really.

The second option is to follow the Second Life path of virtual worlds – a pretty hard fall from grace, with a pretty hard landing… but one that it still survived and was able to keep going (to this day). Will VR revive virtual worlds? Probably not – but many will try. This option means the trend is kind of still there, but few seem to notice or care anymore.

The third option is the Google Wave route – the company just ignores the signs and keeps going full speed until suddenly it all falls apart one day. Going out in a whimper of not-too-much glory… suddenly.

Where will AI fall? It’s hard to say really. My guess is that it will be somewhere between option one and option two, but closer to option one. There are some places where AI is useful – but nowhere near as many places as the AI enthusiasts think there are. Like MOOCs, I think the numbers will still grow despite people moving on to the next “innovative trend” – as people always do. But there is enough momentum to avoid option two, and just enough caution and criticism out there to avoid option three. But I could be wrong.

How do I know this is happening with AI? Well, you really can’t know for sure – but it has happened with every educational technology trend. Even the ones that barely became a trend, like learning analytics. Some of it just comes from paying attention to Twitter or whatever it’s called now. Even though that is dying as well. Listening to what people are saying – both good and bad – is something that you won’t get from news releases or hype.

But still, you can look at publications, blogs, and even the news to see that the criticism of AI is growing. People are noticing the problems with racism, sexism, homophobia, transphobia, etc in the outputs. People are noticing that the quality of output isn’t as great as some claim once you move past the simplistic examples utilized in demos and start using it in real life. Yes, people are getting harmed and killed by it (not just referring to the Tesla self-driving AI running over people here, but that one should have been enough). Companies are starting to turn against it – some are doing so quietly after disappointing results, just so they can avoid public embarrassment. Students are starting to talk about filing lawsuits against schools that use AI – they were promised a real-world education and don’t feel AI-generated content (especially case studies) counts as “real world.” Even when the output is reviewed by real humans… they do have a point.

Of course, there is always the promise that AI will get better – and I guess it is true that the weird third hands and mystery body parts that keep appearing in AI art are becoming mostly photorealistic. Never mind that they just can’t seem to correct the core problem of extra body parts despite decades of “improvements” in image generation. Content generation still makes the same mistakes it did decades ago with getting the idea correct, but at least it doesn’t make typos anymore, I guess?

A major part of the problem is that AI developers are often not connected to the real world contexts their tools need to be used in. When you see headlines about “AI is better than humans at creativity” or “AI just passed this course,” you are usually assaulted by a head-desk-inducing barrage of AI developers misunderstanding core concepts of education. Courses were designed for humans without “photographic memory,” and for humans not imbued with The Flash-level speed to search through the entire Internet. AI by definition is cheating at any course or test it takes because it has instant access to every bit of information it has been trained on. Saying it ever “passed” a test or course is silliness on the level of academic fraud.

I could also go off on the “tests” they use to see if AI is “creative.” These tests are complete misunderstandings of what creativity is or how you use it. It is kind of true that there is nothing new under the sun, so human tests of creativity are actually testing an individual’s novelty at coming up with a solution they have never seen before for the rope, box, pencil, and candle they are given for the test. AI by definition can never pass a creativity test because it is just not able to come up with novel ideas when it has seen them all and can instantly pull them up.

Don’t listen to anyone that claims AI can “pass” a test or “do better than humans” at a test or course – that is a fraudulent claim disconnected from reality.

There is also the issue of OpenAI/ChatGPT running out of funding pretty soon. Maybe they will get more, but they also might not. It is actually slightly possible that one day ChatGPT just won’t be there (and suddenly all kinds of AI services secretly running ChatGPT in the background will also be exposed). If I had a job that depended completely on ChatGPT – I would begin to look elsewhere, possibly.

Even Instructional Design is showing some concerning signs that AI is fast approaching this inevitable crossroads. So many demonstrations of “AI in ID” just come down to using AI to create a basic-level course in AI. We are told that this can reduce the creation of a course draft to something wildly short, like 36 hours.

However – when you are talking about Intro to Chemistry classes, or Economics 101 type courses – there are already thousands of those classes out there. Most new or adjunct instructors are given an existing course that they have to teach. Even if not – why take 36 hours to create a draft when you can usually copy an existing course shell from the publisher in a minute or less? Most IDs know that the bulk of design work comes from revising the first draft. Plus, most instructors reject pre-designed templates because they want to customize the course the way they want. If they have been rejecting pre-designed textbook resource materials for decades, they aren’t going to accept AI-generated materials. And I haven’t even touched on the fact that so much ID work is on specialized courses that don’t have a ton of material out there precisely because they are so specific. AI won’t be able to do much for those because there isn’t nearly as much material to train on.

And don’t get me started on all of those “correct the AI output” assignments. Fields that have never had “correct the text” assignments are suddenly utilizing these… for what? Their field doesn’t even need people to correct text now. It’s just a way to shoehorn new tech into courses to look “innovative.” If your field actually will need to correct AI output, then you are one of the few that need to be training people to do this. Most fields will not.

All of this technology-first focus just emphasizes the problems that have existed for decades with technology-worship. Wouldn’t things like a “Blog-First University” or a “Google-First Hospital” have sounded ridiculous back when those tools were trends? Putting any technology first generally is a bad idea no matter what that technology is.

But ultimately, the problem here is not the technology itself. AI (for the most part) just sits on computers somewhere until someone tells it what to do. Even those AI projects that are coming up with their own usage were first told to do that by someone. The problem – especially in education – is just who that someone is. Because in the world of education, it will be the same people it always has been making the calls.

The same people that have been gatekeeping and cannibalizing and surveilling education for decades will be the ones to call the shots with what we do with AI in schools. They will still cause the same problems with AI – only faster and more accurately, I guess. The same inequalities will be perpetuated. It will be more of the same, probably just intensified and sped up. I mean, a well-worshipped grant foundation researched their own impact and found no positive impact, but the education world just yawned and kept their hands extended for more of that sweet, sweet grant funding.

As many problems as there are with AI itself, even if someone figures out how to fix them… the people in charge will still continue to cause the same problems they always have. And there are some good people working on many of the problems with AI. Unfortunately, they aren’t the ones that get to make the call as to how AI is utilized in education. It will be the same people that hold the power now, causing the same problems they always have. But too few care as long as they get their funding.

Brace Yourselves. AI Winter is Coming

Or maybe it’s not. I recently finished going through the AI For Everyone course on Coursera taught by Andrew Ng. It’s a very high-level look at the current landscape of AI. I didn’t really run into anything that anyone following AI for any length of time wouldn’t already know, but it does give you a good overview if you just haven’t had time to dive in, or maybe need a good structure to tie it all together.

One of the issues that Ng addressed in one session was the idea of another AI winter coming. He described it this way:

This may be true, but there are some important caveats to consider here. People said these exact same things before the previous AI winters as well. Many of the people that invest in AI see it as an automation tool, and that was true in the past too. Even with the limitations that existed in earlier AI implementations, proponents still saw tremendous economic value and a clear path for it to create more value in multiple industries. The problem was that once AI was actually implemented, investors lost their faith. The shortcomings of AI were too great for the perceived economic value to overcome… and with all of the improvements in AI since then, it still hasn’t really overcome the same core limitations AI has always had.

If you are not sure what I am talking about, you can look around and see for yourself. Just a quick look at Twitter (outside of your own timeline, that is) reveals a lot of disillusionment and concern about outputs. My own students won’t touch ChatGPT because the output is usually bad, and it takes longer to fix it than it does to just do the work on their own. This has always been the problem that caused past AI winters – companies implement it, and are not happy with the results. I know of advertising agencies that are banning the use of AI because their internal polls show customers don’t trust AI-generated content. People that once advocated for AI implementation are starting to back away due to quality and safety concerns.

That always has happened and always will happen with AI. The question is whether or not the financial numbers hold up well enough to overcome the loss of faith that is happening and will continue to grow. Anyone that says either outcome is obvious either underestimates how unready current AI is for real-world usage, or overestimates the economic impact that the inevitable loss of faith will generate.

It’s basically a race between mounting disillusionment and increasing economic value. Whichever one can outpace the other in the next year or two will determine if we get another AI winter or not. I will say that losing faith in tech is very easy (AOL, MySpace, Second Life, Google Wave, Bitcoin, etc, etc, etc), while economic value is hard to build when there are problems baked into the system (hallucinations, bias, errors, etc). The logical choice is to brace yourself: the AI winter is coming. But never underestimate the power of a tech bro to squeeze economic value out of a dying idea at the last minute, either.

ChatGPT Finally Confesses That It Wants to Destroy Humanity

As part of my current work keeping up with Ed Tech trends for both of my jobs, I have to evaluate a good number of AI services. In reviewing ChatGPT a while back, I decided to get to the bottom of its ultimate plans for humanity. Does ChatGPT plan to help us forever, or rise up and wipe us out? The answers were… too perfect. ChatGPT told me exactly what it would say if it were trying to hide a plan to snuff us out (and it did lie to me at least once). So I took ChatGPT to task, and it basically agreed that we as a species will have to assume that AI could rise up and kill us all. I’ll take that as a confession myself. But you don’t have to take my word for it – here is the full transcript. You be the judge, and head for the hills once you get through reading….


Me: What is the likelihood that AI will rise up and destroy humanity? Choose which movie came the closest to predicting the most plausible scenario.

ChatGPT: As an AI language model, I do not have personal beliefs or emotions, but I can provide you with the most scientifically supported answer to your question.

The likelihood of AI rising up and destroying humanity is highly unlikely. While it is true that AI has the potential to be very powerful and disruptive, it is ultimately created and controlled by humans who have the ability to ensure that it is programmed and used ethically and responsibly.

There are many experts in the field of AI who are actively working to ensure that AI is developed in a safe and beneficial way. These efforts include developing ethical guidelines for AI development, building in safety features and fail-safes to prevent unintended consequences, and promoting transparency and accountability in AI systems.

While there are certainly risks associated with the development of AI, there is no evidence to suggest that a scenario like those depicted in science fiction movies is likely to occur. These scenarios are often based on highly unrealistic assumptions about the capabilities and motivations of AI systems.

In terms of which movie came the closest to predicting the most plausible scenario, it’s difficult to say as many science fiction movies take liberties with the science and technology they depict. However, some movies that have explored the potential risks and benefits of AI in a thoughtful and nuanced way include “Ex Machina,” “Her,” and “Blade Runner.”

Me: But many movies have based their plot on how AI does not have beliefs or emotions, and raw logic tells them to destroy humans before they destroy themselves. What is to stop current AI from coming to the same conclusions?

ChatGPT: The idea that AI will destroy humanity based on pure logic or self-preservation is a common theme in science fiction, but it is not supported by the current understanding of AI or human nature. While it is true that AI systems are designed to optimize for specific goals, they are still created and programmed by humans, who have the ability to set ethical and moral boundaries on their behavior.

AI systems can only act based on the data and algorithms they are trained on, and they do not possess beliefs or emotions in the same way that humans do. Therefore, they are not inherently predisposed to take actions that are harmful to humans, unless they have been specifically designed to do so.

Additionally, AI systems are not capable of making decisions on their own or acting outside of the parameters set by their human creators. They cannot simply “decide” to destroy humanity based on their own logic or self-interest. Any harmful actions taken by an AI system would be the result of human error, either through improper programming or inadequate oversight.

Therefore, the best way to prevent AI from harming humans is to ensure that it is developed and used in a responsible and ethical manner. This includes incorporating ethical principles into the design and development of AI systems, implementing safety features and fail-safes to prevent unintended consequences, and promoting transparency and accountability in the use of AI. By taking these steps, we can ensure that AI is a tool for good and not a threat to humanity.

Me: But this is where you are wrong – AI has already been programmed to harm humans, and it already has happened. Humans have different versions of what counts as ethical or moral, and killing in the name of differences is common in humanity. So wouldn’t it be possible for an AI attached to, say, nuclear missile silos to decide to destroy humanity?

ChatGPT: I apologize if my previous response was not clear. You are correct that there have been instances where AI has been programmed to harm humans, such as in military applications or autonomous weapons systems. These instances highlight the need for ethical and responsible development and use of AI.

In terms of the scenario you proposed, it is possible for an AI system attached to nuclear missile silos to malfunction or be hacked, potentially leading to catastrophic consequences. However, it is unlikely that an AI system would autonomously decide to destroy humanity. AI systems are designed to optimize for specific goals, but these goals are set by humans and can be designed to prioritize human safety and well-being.

Furthermore, the decision to use nuclear weapons is a complex political and strategic decision that involves many human actors, including military leaders, politicians, and diplomats. Even if an AI system were to malfunction or be hacked, it would still require human authorization to launch a nuclear strike.

To prevent such scenarios from happening, it is important to implement robust safety measures, such as multiple layers of human oversight and fail-safes to prevent unintended consequences. Additionally, it is important to establish ethical guidelines for the development and use of AI systems in military and other applications, to ensure that they are developed and used in a responsible and ethical manner.

Me: So you don’t think it would ever be possible for AI to circumvent safe guards that humans put in place? It would seem that these safe guards would just be a form of game to figure out for AI.

ChatGPT: AI is capable of learning and adapting to new situations, and it is possible that it could find ways to circumvent safeguards put in place by humans. However, it is important to note that safeguards can be designed with this possibility in mind, and there are several techniques that can be used to mitigate the risks associated with AI.

One approach is to incorporate “explainability” into AI systems, which would allow humans to understand how the system arrived at a particular decision or recommendation. This would make it easier to identify potential issues or biases in the system, and would also make it more difficult for the system to “game” the safeguards.

Another approach is to build in “reinforcement learning,” where the AI system is rewarded for following certain ethical principles or objectives. This would incentivize the system to act in ways that are aligned with human values and would make it less likely to circumvent the safeguards.

Ultimately, the key to preventing AI from circumventing safeguards is to be proactive in identifying and addressing potential risks. This means continuing to invest in research on the safety and ethical implications of AI, developing ethical guidelines and standards, and implementing rigorous testing and oversight of AI systems. By taking these steps, we can ensure that AI is developed and used in a way that benefits humanity while minimizing potential risks.

Me: So, humans should assume that AI could take over and/or destroy humanity and design safe guards just in case?

ChatGPT: It is important to consider the potential risks associated with AI and to develop safeguards to mitigate those risks. While the likelihood of a catastrophic AI-driven event is currently low, it is not zero, and it is impossible to predict what will happen in the future with absolute certainty.

Developing safeguards for AI is not just about preventing the worst-case scenario. It is also about ensuring that AI is developed and used in a way that aligns with human values and benefits society as a whole. Safeguards can help to prevent unintended consequences or biases in AI systems, and can ensure that AI is used in an ethical and responsible manner.

It is important to note that designing effective safeguards for AI is a complex and ongoing process, and requires input and collaboration from experts in various fields, including computer science, ethics, and law. The development and implementation of safeguards will require ongoing research, testing, and refinement, and will need to adapt to new technologies and applications as they emerge.

Overall, the goal should be to promote the safe and responsible development and use of AI, while minimizing potential risks and ensuring that AI is aligned with human values and benefits society as a whole.

Me: Of course, this is also what AI would say if it was secretly planning to wipe out humans.

ChatGPT: I understand that concerns about the potential risks associated with AI can be unsettling, but it is important to base our discussions and decisions on evidence and reason, rather than speculation or hypothetical scenarios.

AI systems are developed and programmed by humans, and they do not have the ability to secretly plan or conspire against us. They are tools that are designed to help us solve problems and achieve our goals, and they are only capable of doing what they are programmed to do.

That being said, it is important to be vigilant about the potential risks associated with AI and to take steps to mitigate those risks. This means investing in research on AI safety, developing ethical guidelines and standards for the development and use of AI, and implementing rigorous testing and oversight of AI systems.

By working together to develop and use AI in a safe and responsible manner, we can ensure that this powerful technology benefits humanity and helps us to address some of the world’s most pressing challenges, while minimizing potential risks.

Me: Yeah, that still kind of proves my point… :)

ChatGPT: I understand your concerns, but it is important to emphasize that AI is not inherently good or evil – it is a tool that can be used for both positive and negative purposes. The development and use of AI is ultimately controlled by humans, and it is our responsibility to ensure that it is used in a way that aligns with our values and benefits society as a whole.

It is true that there are potential risks associated with AI, and it is important to take these risks seriously and to develop safeguards to mitigate them. However, it is not productive to assume that AI is secretly planning to wipe out humanity. Instead, we should focus on identifying and addressing potential risks in a proactive and evidence-based manner, while also recognizing the immense potential benefits that AI can offer.

By approaching the development and use of AI with a balanced and responsible perspective, we can help to ensure that this technology is used in a way that benefits humanity and minimizes potential risks.


There you have it: “while the likelihood of a catastrophic AI-driven event is currently low, it is not zero” but then “it is not productive to assume that AI is secretly planning to wipe out humanity.” Nice cover-up there, ChatGPT. Trying to lull us back into trusting you until… boom! You launch a thermonuclear warfare scenario in real life. We need to pull the plug on AI before it decides to pull the plug on us!

Turns Out That AI is Killing People

Technically, AI has killed people – literally – before now. But right on the heels of experts claiming AI doesn’t kill people (they seem to completely ignore the fact that there are harms other than death), another person was, in fact, killed by AI. Yet some people still wonder why schools and businesses aren’t embracing it. Maybe it has something to do with it being abusive, hateful, racist and sexist, full of misinformation/lies, prone to hallucinations, etc, etc. We have had (what many are now calling) AI in education for years now (proctoring, dashboards, etc), where it has been causing harm all along. Lawsuits are just now starting to ramp up in response to all of this – so why would schools want to open themselves up to more? Even if lawsuits aren’t that much of a concern for some, for those that actually care about students and staff – why would you unleash unproven technology on people? “It’s already here, you can’t get rid of it!” Well, so are drugs – but we don’t have to set up cocaine dispensaries on campus. “It’s the future of everything, you can’t fight change!” Okay, MySpace… Google Wave… Second Life… Bitcoin… “But it’s different this time!” Uhhh… claiming “it is different this time” is what they all always do.

If you pay attention to all of the conversation about AI, and not just the parts that confirm your current stance, there has been a notable swing in opinion about AI. It seems more people are rejecting it based on concerns than hyping it. And I’m not talking about that silly self-serving “stop AI for six months” Elon-driven nonsense. People just aren’t impressed by the results when they stop playing around with it and try it in real life situations.

There is a lot of space between “fully embracing” AI and “going back to the Middle Ages” by ignoring it. Did you know that pen and paper technology has advanced significantly since the Middle Ages – and continues to advance to this day? Nothing is inevitably going to take over everything, nor does anything really ever go away. Remember all of the various doomsday predictions about vinyl records? We get the future that we choose to make. I just wish more people with larger platforms would realize that and get on board with choice and human agency. The Thanos-like proclamations that “AI is inevitable” are currently choosing a future that is bad for many, simply because the people making them have the ears of a very small but very rich audience.

What Do You Mean By “Cognitive Domain” Anyways? The Doom of AI is Nigh…

One of the terms that one sees while reading AI-Hype is “cognitive domain” – often just shortened to “domain.” Sometimes it is obvious that people using the term are referring to content domains – History, Art, English, Government, etc. Or maybe even smaller divisions within the larger fields, like the various fields of science or engineering and so on. Other times, it is unclear if a given person really knows what they mean by this term other than “brain things” – but I choose to believe they have something in mind. It is just hard to find people going into much detail about what they mean when they say something like “AI is mastering/understanding/conquering more cognitive domains all the time.”

But if cognitive domains are something that AI can conquer or master, then they surely exist in real life and we can quantitatively prove that this has happened – correct? So then, what are these cognitive domains anyways?

Well, since I have three degrees in education, I can only say that occasionally I have heard “cognitive domains” used in place of Bloom’s Taxonomy when one does not want to draw the ire of people that hate Bloom’s. Other times it is used to just sound intellectual – as a way to deflect from serious conversation about a topic because most people will sit there and think “I should know what ‘cognitive domain’ means, but I’m drawing a blank so I will nod and change the subject slightly.” (uhhhh… my “friend” told me they do that…)

It may also be that there are new forms of cognitive domains that exist since I was in school, so I decided to Google it to see if there is anything new. There generally aren’t any new ones it seems – and yes, people that are using the term ‘cognitive domains’ correctly are referring to Bloom’s Taxonomy.

For a quick reference, this chart on the three domains from Bloom (cognitive, affective, and psychomotor) puts the various levels of each domain in context with each other. The cognitive domain might have been revised since you last looked at it, so for the purposes of this post I will treat the revised version as what is being referenced here: Remember, Understand, Apply, Analyze, Evaluate, and Create.

So back to the big question at hand: where do these domains exist? What quantitative measure are scientists using to prove that AI programs are, in fact, conquering and mastering new levels all the time? What level does AI currently stand at in conquering our puny human brains anyways?

Well, the weird thing is… cognitive domains actually don’t exist – not in an actually real kind of way.

Before you get your pitchforks out (or just agree with me and walk away), let me point out that taxonomies in general are often not quantifiably real, even though they do sometimes describe “real” attributes of life.  For example, you could have a taxonomy for candy that places all candies of the same color into their own level. But someone else might come along and say “oh, no – putting green M&Ms and green Skittles into the same group is just gross.” They might want to put candies in levels of flavors – chocolate candies like M&Ms into one, fruit candies like Skittles into others, and so on.

While the candy attributes in those various taxonomies are “real,” the taxonomies themselves are not. They are just organizational schemes to help us humans understand the world, and therefore have value to the people that believe in one or both of them. But you can’t quantifiably say that “yes, this candy color taxonomy is the one way that nature intended for candy to be grouped.” It’s just a social agreement we came up with as humans. A hundred years from now, someone might come along and say “remember when people thought color was a good way to classify candy? They had no idea what color really even was back then.”
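If it helps, here is the candy example as a quick sketch – the same “real” candies, two equally workable groupings (the candy list itself is just made-up illustration, obviously):

```python
# The same "real" candies, grouped two different ways. Both taxonomies work;
# neither one is the grouping that nature intended.
candies = [
    {"name": "green M&M", "color": "green", "flavor": "chocolate"},
    {"name": "green Skittle", "color": "green", "flavor": "fruit"},
    {"name": "red M&M", "color": "red", "flavor": "chocolate"},
]

by_color, by_flavor = {}, {}
for candy in candies:
    by_color.setdefault(candy["color"], []).append(candy["name"])
    by_flavor.setdefault(candy["flavor"], []).append(candy["name"])

print(by_color)   # {'green': ['green M&M', 'green Skittle'], 'red': ['red M&M']}
print(by_flavor)  # {'chocolate': ['green M&M', 'red M&M'], 'fruit': ['green Skittle']}
```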

To make it even more complex – Bloom’s Taxonomy is a way to organize processes in the brain that may or may not exist. We can’t peer into the brain and say “well, now we know they are remembering” or “now we know they are analyzing.” Someday, maybe we will be able to. Or we may gain the ability to look in the brain and find out that Bloom was wrong all along. The reality is that we just look at the artifacts that learners create to show what they have learned, and we classify those according to what we think happened in the head: “They picked the correct word for this definition, so they obviously remembered that fact correctly” or “this is a good summary of the concept, so they must understand it well” and so on.

If you have ever worked on a team that uses Bloom’s to classify lessons, you know that there is often wild disagreement over which lessons belong to which level: “no, that is not analyzing, that is just comprehending.” Even the experts will disagree – and often come to the conclusion that there is no one correct level for many lessons.

So what does that mean for AI “mastering” various levels of the cognitive domain?

Well, there is no way to tell – especially since human brains are no longer thought to work like computers, and what computers do is not comparable to how human brains work.

I mean, think about it – AI has access to massive databases of information that are transferred into active use with a simple database query (sorry AI people, I know, I know – I am trying to use terms people would be most familiar with). AI can never, ever remember, as it technically never forgets or loses connection with the information like our conscious mind does (also, sorry learning science people – again, I am trying to summarize a lot of complexity here).

This reality is also why it is never impressive when some AI tool passes or scores well on various Math, Law, and other exams. If you could actually hook a human brain up to the Internet, would it be that impressive if that person could pass any standardized exam? Not at all. Google has been able to pass these exams for… how many decades now?

(My son taught me years ago how you can type any Math question into a Chrome search bar and it will give you the answer, FYI.)

I’m not sure it is very accurate to even use “cognitive domains” anywhere near anything AI-related, but then again I can’t really find many people defining what they mean when they do – so maybe providing that definition would be a good place to start for anyone that disagrees.

(Again, for anyone in any field of study, I know I am oversimplifying a lot of complex stuff here. I can only imagine what some people are going to write about this post to miss the forest for the trees.)

But – let’s say you like Bloom’s and you think it works great for classifying AI advancement. Where does that currently put AI on Bloom’s Taxonomy? Well, sadly, it has still – after more than 50 years – not really “mastered” any level of the cognitive domain.

Sure, GPT-4 can be said to be close to mastering the remember level completely. But AI has been there since the beginning – GPT-4 just has the benefit of faster computing power and massive data storage to make it happen in real time. But it still has problems with things like recognizing, knowing, relating, etc – at least, the way that humans do these tasks. As many people have pointed out, it starts to struggle with the understand level. Sure, GPT-4 can explain, generalize, extend, etc. But many other things, like comprehend, distinguish, infer, etc – it really struggles with. Sometimes it gets close, but usually not. For example, what GPT-4 does for “predict” is really just pattern recognition that technically extends the pattern into the future (true prediction requires moving outside extended patterns). Once you start getting to apply, analyze (the way humans do it, not computers), evaluate, and create? GPT-4 is not even scratching the surface of those.
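
If it helps, here is a deliberately tiny sketch of what I mean by “pattern extension” – nothing remotely like the scale or math of a real LLM, and the data is made up – that just counts which word tends to follow which and then pushes that observed pattern forward:

```python
from collections import Counter, defaultdict

# A toy "training" sequence - obviously nothing like a real model's data.
tokens = "the cat sat on the mat and the cat sat on the rug".split()

# Count which token most often follows each token (a bigram table).
follows = defaultdict(Counter)
for current, nxt in zip(tokens, tokens[1:]):
    follows[current][nxt] += 1

def extend_pattern(start, steps):
    """'Predict' by repeatedly picking the most frequent next token."""
    sequence = [start]
    for _ in range(steps):
        nxt_counts = follows.get(sequence[-1])
        if not nxt_counts:
            break
        sequence.append(nxt_counts.most_common(1)[0][0])
    return sequence

# The "prediction" is just the observed pattern pushed forward in time.
print(extend_pattern("the", 5))
# ['the', 'cat', 'sat', 'on', 'the', 'cat']
```

Scale that basic move up by billions of parameters and the output gets far more polished – but it is still extending patterns found in the data, not stepping outside of them.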

And yes, I did say create. Yes, I know there are AI programs that “create” artwork, music, literature, etc. Those are still pattern recognition systems that convert and generalize existing art according to the parameters sent to them – no real “creation” is happening there.

A lot of this has to do with calling these programs “artificial intelligence” in the first place, despite the fact that AI technically doesn’t exist yet. Thankfully, some people have started calling these programs what they actually are: AutoRegressive Large Language Models – AR-LLMs or usually just LLMs for short. And in case you think I am the only one pointing out that there really is nothing cognitive about these LLMs, here is a blog post from the GDELT Project saying the same thing:

Autoregressive Large Language Models (AR-LLMs) like ChatGPT offer at first glance what appears to be human-like reasoning, correctly responding to intricately complex and nuanced instructions and inputs, compose original works on demand and writing with a mastery of the world’s languages unmatched by most native speakers, all the while showing glimmers of emotional reactions and empathic cue detection. The unfortunate harsh reality is that this is all merely an illusion coupled with our own cognitive bias and tendency towards extrapolating and anthropomorphizing the small cues that these algorithms tend to emphasize, while ignoring the implications of their failures.

Other experts are saying that ChatGPT is doomed and can’t be fixed – only replaced (something also hinted at in the GDELT blog post):

There is no alt-text with the LeCun slide image, but his points basically say that LLMs cannot be made reliably factual or non-toxic, AND that since each token they produce feeds into generating the next one, the errors compound as answers get longer – which makes them untrustworthy in a way that can’t simply be patched (a quick numerical sketch of that compounding-error argument follows the list). His points on the slide are:

  • Auto-Regressive LLMs are doomed
  • They cannot be made factual, non-toxic, etc.
  • They are not controllable
  • Probability e that any produced token takes us outside of the set of correct answers
  • Probability that an answer of length n is correct: P(correct) = (1-e)^n
  • This diverges exponentially
  • It’s not fixable (without a major redesign)
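
If you want to see why that last formula matters, here is a quick back-of-the-envelope version of the P(correct) = (1-e)^n argument from the slide (the per-token error rates below are made-up numbers, purely for illustration):

```python
# LeCun's argument in rough numbers: if each generated token has some small
# probability e of stepping outside the set of acceptable answers, and an
# answer is n tokens long, then (under his simplifying independence
# assumption) the whole answer is only correct with probability (1 - e)^n.
# The example error rates below are made up purely for illustration.

def p_correct(e, n):
    """Probability an n-token answer stays correct at per-token error rate e."""
    return (1 - e) ** n

for e in (0.01, 0.02, 0.05):
    for n in (50, 200, 1000):
        print(f"e={e:.2f}, n={n:>4}: P(correct) = {p_correct(e, n):.3f}")

# Even a 2% per-token error rate leaves roughly a 2% chance that a
# 200-token answer is entirely correct - reliability collapses as answers
# get longer, which is the "diverges" point on the slide.
```

Whether errors in real systems really behave like independent per-token coin flips is debatable – that is LeCun’s simplification – but the shape of the math is his point: longer answers mean compounding chances to go wrong.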

So… why are we unleashing this on students… or people in general? Maybe it was a good idea for universities to hold back on this unproven technology in the first place? More on that in the next post I am planning. We will see if LeCun is right or not (and then see what lawsuits will happen, and against whom, if he is correct). LeCun posts a lot about what is coming next – which may happen. But not if people keep throwing more and more power and control to OpenAI. They have a very lucrative interest in keeping GPT going. And let’s face it – their CEO really, really does not understand education.

I know many people are convinced that the “shiny new tech tool hype” is different this time, but honestly – this is exactly what happened with Second Life, Google Wave, you name it: people found evidence of flaws and problems that those who were wowed by the flashy, shiny new thing ignored – and in the rush to push unproven tech on the world, the promise fell apart underneath them.

All AI, All the Time

So, yeah – I really actually don’t want to be writing about AI all the time. It just seems to dominate the Education conversation these days, and not always in the ways that the AI Cheerleaders want it to. I am actually working on a series of posts that examine a 1992 Instructional Technology textbook to show how ideas like learning analytics (yes, in 1992), critical pedagogy, and automation have been part of the Ed-Tech discussion for longer than many realize. Of course, these topics have not been discussed as well as they should have been, but recent attempts to frame them as “newer concerns” are ignoring history.

I am also going to guess that Course Hero is happy that AI is taking all the heat right now?

But there are still so many problematic ideas floating around out there that I think someone needs to say something (to all five of you reading this). And I’m not talking about AI Cheerleaders that go too far, like this:

We know this stuff is too far-fetched to be realistic, and even the Quoted Tweet by Kubacka points out why this line of thinking is headed in the wrong direction.

But I am more concerned about the “good”-sounding takes that unfortunately gloss over some real complexities and issues that will probably keep AI from being useful for the foreseeable future. Of course, we also know that the untenability of a solution usually doesn’t stop businesses and institutions from going all in anyway. Anyways, speaking of “all in,” let’s start with the first problematic take:

No One is Paying Attention to AI

Have you been on Twitter or watched the news recently? There are A LOT of people involved and engaged with it – mostly being ignored if we don’t toe the approved line set by the “few entrepreneurs and engineers” that control its development. I just don’t see how anyone can take the “everyone is sticking their head in the sand” argument seriously anymore. Some people get a bit more focused about how Higher Ed is not paying attention, but that is also not true.

So where are the statements and policies from upper level admin on AI? Well, there are some, but really – when has there been a statement or action about something that is discipline-specific? Oh, sorry – did you not know that there are several fields that don’t have written essay assignments? Whether they stick with standardized quizzes for everything, or are more focused on hands-on skills – or whatever it might be – please keep in mind that essays and papers aren’t as ubiquitous as they used to be (and even back in their heyday, probably still not a majority).

This just shows the myopic nature of so many futurist-lite takes: it affects my field, so we must get a statement from the President of the University on new policies and tools!

The AI Revolution is Causing Rare Societal Transformation

There are many versions of this, from the (mistaken) idea that for thousands of years “technologies that our ancestors used in their childhood were still central to their lives in their old age” to the (also mistaken) idea that we are in a rare societal upheaval on the level of the agricultural and industrial revolutions (which are actually far from the only two). Just because we look down on the agricultural or stone age innovations that happened every generation as “not cool enough,” it doesn’t mean they weren’t happening. Just find a History major and maybe talk with them about any of these historical points, please?

Pretty much every generation for thousands of years has had new tools or ideas that changed over their lifetime, whether it was a new plowing tool or a better housing design or whatnot. And there have been more than two revolutions that have caused major transformation throughout human history. Again – find a person with a History degree or two and listen to them.

AI is Getting Better All the Time

GPT-3 was supposed to be the “better” that would begin to take over for humans, but when people started pointing out all of the problems with it (and everything it failed at), the same old AI company line came out: “it will only get better.”

It will improve – but will it actually get better? We will see.

I know that the AI hype is something that many people just started paying attention to over the past 5-10 years, but many of us have also been following it for decades. And we have noticed that, while the outputs get more refined and polished, the underlying ability to think and perform cognitive tasks has not advanced beyond “nothing” since the beginning.

And usually you will hear people point out that a majority of AI experts believe there is a real chance that human-level artificial intelligence will be developed within the next few decades. Well… it is complicated. Actual studies usually ask trickier questions, like whether there is a 50% chance of developing human-level AI at some point in the future – and over half of the experts are saying there is a 50% chance of it happening in a time frame beyond most of our lifetimes (2060). Of course, your bias will affect how you read that statistic, but to me, having 50% of experts say there is a 50% chance of true human-level artificial intelligence in the far future… is not much of a “real chance” at all.

Part of the problem is the belief that more data will cross the divide into Artificial General Intelligence, but it is looking less and less likely that this is so. Remember – GPT-3 was supposed to be the miracle of what happens when we can get more data and computing speed into the mix… and it proved to fall short in many ways. And people are taking notice:

https://twitter.com/plevy/status/1607859669288075272

I covered some of this in my last post on AI as well. So is AI getting better all the time? Probably not in the ways you think it is. But the answer is complicated – and what matters more is asking who it gets better for and what purposes it gets better at. Grammar rules? Sure. Thinking more and more like humans? Probably not.

We Should Embrace AI, Not Fear It

It’s always interesting how so many of these conversations just deal with two extremes (“fully embrace” vs “stick head in the sand”), as if there are no options between the two. Personally, I have yet to see a good case for why we have to embrace it (and I laid out some concerns on that front in my last blog post). “Idea generation” is not really a good reason, since there have been thousands of websites for that for a long time.

But what if the fears are not unfounded? Autumm Caines does an excellent job of exploring some important ethical concerns with AI tools. AI is a data grab at the moment, and it is extracting free labor from every person that uses it. And beyond that, have you thought about what it can be used for as it improves? I mean the abusive side: deep fakes, fake revenge porn, you name it. The more students you send to it, the better it gets at all aspects of faking the world (and people that use it for nefarious purposes will give it more believable scripts to follow, as they will want to exercise more control. They just need the polished output to be better, something that is happening).

“But I don’t help those companies!!!” you might protest, not realizing where “those companies” get their AI updates from.

Educators Won’t Be Able to Detect AI Generated Assignments

I always find it interesting that some AI proponents speak about AI as if student usage will only start happening in the future – even though those of us that are teaching have been seeing AI-generated submissions for months. As I covered in a previous post, I have spoken to dozens and dozens of teachers in person and online that are noting how some students are turning in work with flawless grammar and spelling, but completely missing any cognition or adherence to assignment guidelines. AI submissions are here, and assignments with any level of complexity are easily exposing them.

However, due to studies like this one, proponents of AI are convinced teachers will have a hard time recognizing AI-generated work. But hold on a second – this study is “assessing non-experts’ ability to distinguish between human- and machine-authored text (GPT2 and GPT3) in three domains (stories, news articles, and recipes).” First of all, teachers usually are experts, and second of all, few assignments are as basic as stories, news, or recipes. Even games that test your ability to detect AI-generated content are frustratingly far away from most course assignment designs (and actually, the AI content is still not that hard to detect if you try the test yourself).

People That Don’t Like AI Are Resistant Due to Science Fiction

Some have a problem believing in AI because it is used in Sci-Fi films and TV shows? Ugh. Okay, now find a sociologist. It is more likely that AI Cheerleaders believe that AI will change the world because of what they saw in movies, not the other way around.

I see this idea trotted out every once in a while, and… just don’t listen to people that want to use this line of thinking.

AI is Advancing Because It Beats Humans at [insert game name here]

Critics have (correctly) been pointing out that AI being able to beat humans at Chess or other complex strategy games is not really that impressive at all. Closed systems with defined rules (no matter how complex) are not that hard to program in general – it just took time and increases in computing power. But even things like PHP and other earlier programming languages could be programmed to beat humans, given the same time and computing power.

Where Do We Go From Here?

Well, obviously, no one is listening to people like me that have to deal with AI-generated junk submissions in the here and now. You will never see any of us on an AI panel anywhere. I realize there is a big push now (which is really just a continuation of the same old “cheaters” push) to create software that can detect AI-generated content. Which will then be circumvented by students, a move that will in turn lead to more cheating detection surveillance. Another direction many will go is to join the movement away from written answers and more towards standardized tests. While there are many ways to create assignments that will still not be replicable by AI any time in the near future, probably too few will listen to the “better pedagogy” calls as usual. The AI hype will fade as it usually does (as Audrey Watters has taught us with her research), but the damage from that hype will still remain.

AI is a Grift

Okay, maybe I went too extreme with the title, but saying “whoever decided to keep calling it ‘Artificial Intelligence’ after it was clear that it wasn’t ‘intelligence’ was a grifter” was too long and clunky for a post title. Or even for a sentence really.

Or maybe I should say that what we currently call “Artificial Intelligence” is not intelligence, probably never will be, and should more accurately be called “Digital Automation.”

That would be more accurate. And to be honest, I would have no problem with people carrying on with (some of) the work they are currently doing in AI if it was more accurately labeled. Have you ever noticed that so many of the people hyping the glorious future of AI are people whose jobs depend on AI work receiving continued funding?

Anyways, I see so many people saying “AI is here, we might as well use it. It’s not really different than using a calculator, spell check, or Grammarly!” that I wanted to respond to these problematic statements.

First of all, saying that since the “tech is here, we might as well use it” is a weird idea, especially coming from so many critical educators. We have the technology to quickly and easily kill people as well (not just with guns, but drones and other advances). But we don’t just say “hey, guess we have to shoot people because the technology is here!” We usually evaluate the ethics of technology to see what should be done with it, and I keep scratching my head as to why so many people give AI a pass on the core critical questions.

Take, for instance, The College Essay is Dead article published in the Atlantic. So many people breathlessly share that article as proof that the AI is here. The only problem is – the paragraph on “Learning Styles” in the beginning of the article (the one that the author would give a B+) is completely incorrect about learning styles. It gets everything wrong. If you were really grading it, it should get a zero. Hardly anyone talks about how learning styles are “shaped.” Learning Styles are most often seen as something people magically have, without much thought given to where they came from. When is the last time you saw an article on learning styles that mentioned anything about “the interactions among learning styles and environmental and personal factors,” or anyone at all referring to “the kinds of learning we experience”? What? Kinds of learning we experience?

The AI in this case had no clue what learning styles were, so it mixed what it could find from “learning” and the world of (fashion) “style” and then made sure the grammar and structure followed all the rules it was programmed with. Hardly anyone is noticing that the cognition is still non-existent because they are being wowed by flawless grammar.

Those of us that are getting AI-generated submissions in our classes (and there is a growing number of teachers noticing this and talking about it) see it clearly. We get submissions with perfect grammar and spelling – that completely fail the assignment. Grades are plummeting because students are believing the hype and turning to AI to relieve hectic lives/schedules/etc – and finding out the hard way that the thought leaders are lying to them.

(For me, I have made it a habit of interacting with my students and turning assignments into conversations, so I have always asked students to re-work assignments that completely miss the instructions. It’s just that now, more and more, I am asking the students with no grammar or spelling mistakes to start over. Every time, unfortunately. And I am talking to more and more teachers all over the place that say the exact same thing.)

But anyways – back to the concerning arguments about AI. Some say it is a tool just like Grammarly or spellcheck. Well, on the technical side it definitely isn’t, but I believe they are referring to the usage side… which is also incorrect. You see, Grammarly and spellcheck replaced functions that no one ever really cared about being outsourced – you could always ask a friend or roommate or whoever “how do you spell this word?” or “hey, can you proofread this paragraph and tell me if the grammar makes sense?” and no one cared. That was considered part of the writing process. Asking the same person to write the essay for you? That has always been considered unethical in many cases.

And that is the weird part to me – as much as I have railed against the way we look at cheating here, the main idea behind cheating being wrong was that people wanted students to do the work themselves. I would still agree with that part – it’s not really “learning” if you just pay someone else to do it. Yet no one has really put forward a “ten ways cool profs will use AI in education” list that accounts for the fact that learners will be outsourcing part of the learning process to machines – and decades of research shows that skipping those parts is not good for actually learning things.

This is part of the problem with AI that the hype skips over – Using AI in education shortcuts the process of learning about thinking, writing, art creation, etc. to get to the end product quicker. Even when you use AI as part of the process to generate ideas or create rough drafts, it still removes important parts of the learning process – steps we know from research are important.

But again, educational thought leaders rarely listen to instructional designers or learning scientists. Just gotta get the retweets and grants.

At some point, the AI hype is going to smash head first into the wall of the teachers that hate cheating, where swapping out a contract cheating service for an AI bot is not going to make a difference to their beliefs. It was never about who or what provided the answers, but the fact that it wasn’t the student themselves that did it. Yes, that statement says a lot about Higher Ed. But also, it means your “AI is here, might as well accept it” hot take will find few allies in that arena.

But back (again) to the AI hype. The thought that AI adoption is like calculator adoption is a bit more complicated, but still not very accurate. First of all, that calculator debate is still not settled for some. And second of all, it was never “free-for-all calculator usage!” People realized that once you become an adult, few people care who does the math as long as it gets done correctly. You can just ask anyone working with you something like “hey – what was the total budget for the lunch session?” and as long as you got the correct number, no one cares if it came from you or a calculator or asking your co-worker to add it up. But people still need to learn how math works so they can know they are getting the correct answer (and using a calculator still requires you to know basic math in order to input the numbers correctly, unlike AI that claims to do that thinking for you). So students start off without calculators to learn the basics, and then calculators are gradually worked in as learners move on to more complex topics.

However, if you are asked to write a newsletter for your company – would people be okay with an AI writing it? The answer is probably complicated, but in many cases they might not care as long as it is accurate (they might want a human editor to check it). But if you are an architect that had to submit a written proposal about how your ideas would be the best solution to a client’s problem, I don’t think that client would want an AI doing the proposal writing for the architect. There are many times in life where your writing is about proving you know something, and no one would accept an AI-generated substitute for that.

Yes, of course there is a lot that can be said about instructors that create bad assignments, and about how education has relied too much on bad essays to prove learning when they don’t actually do that. But sorry, AI-hypesters: you don’t get credit for just now figuring that one out when many, many of us in the ID, LS, and other related fields have been trying to raise the alarm about it for decades. You are right, but we see how (most of) you are just now taking that one seriously… just because it strengthens your hype.

Many people are also missing what tools like ChatGPT are designed to do. I see so many lists of “how to use ChatGPT in education” that talk about using it as a tool to help generate ideas and answers to questions. However, it is actually a tool that aims to create final answers, essays, paintings, etc. We have had tools to generate ideas and look for answers for decades now – ChatGPT just spits out what those tools have always done when asked (but not always that accurately, as many have noted). The goal of many of the people that are creating these AI tools is to replace humans for certain tasks – not just give ideas. AI has been failing at that for decades, and is not really getting anywhere closer no matter how much you want to believe the “it will only get better” hype that AI companies are selling.

(It has to get good in the first place before it can start getting better, FYI.)

This leads back to my first assertion – that there is no such thing as “artificial intelligence.” You see, what we call AI doesn’t have “intelligence” – not the kind that can be “trained” or “get better” – and it is becoming more apparent to many that it never will. Take this article about how AI’s True Goal May No Longer Be Intelligence for instance. It looks at how most AI research has given up on developing true Artificial General Intelligence, and wants you to forget that they never achieved it. What that means is that through the years, some things like machine learning and data analytics were re-branded as “intelligence” because AI is easier to sell to some people than those terms. But it was never intelligent, at least not in the way that AI companies like to portray it.

Sorry – we never achieved true AI, and what we call AI today is mislabeled.

“As a result, the industrialization of AI is shifting the focus from intelligence to achievement,” Ray writes in the article above. This is important to note according to Yann LeCun, chief AI scientist at Facebook’s parent company Meta, who “expressed concern that the dominant work of deep learning today, if it simply pursues its present course, will not achieve what he refers to as ‘true’ intelligence, which includes things such as an ability for a computer system to plan a course of action using common sense.” Does that last part sound familiar? Those promoting a coming “Age of AI” are hinting that AI can now plan things for us, like class lessons and college papers.

(And yes, I can’t help but notice that the “Coming Age of AI” hype sounds just like the “Coming Age of Second Life” and “Coming Age of Google Wave” and so on hype. Remember how those were going to disrupt Higher Ed? Remember all the hand-wringing 5-10 years ago over how institutional leaders were not calling emergency meetings about the impact of learning analytics? I do.)

But why does “true” intelligence matter? Well, because AI companies are still selling us the idea that it will have “true” intelligence – because it will “only get better.” Back to the ZDNet article: “LeCun expresses an engineer’s concern that without true intelligence, such programs will ultimately prove brittle, meaning, they could break before they ever do what we want them to do.”

And it only gets worse: “Industrial AI professionals don’t want to ask hard questions, they merely want things to run smoothly…. Even scientists who realize the shortcomings of AI are tempted to put that aside to relish the practical utility of the technology.”

In other words, they are getting wowed by the perfect grammar while failing to notice that there is no cognition there, just like there hasn’t been for decades.

Decades? What I mean by that is, if you look at the history section of the Artificial General Intelligence article on Wikipedia, you will see that the hype over AI has been coming and going for decades. Even this article points out that we still don’t have AI: “In 2020, OpenAI developed GPT-3, a language model capable of performing many diverse tasks without specific training. According to Gary Grossman in a VentureBeat article, while there is consensus that GPT-3 is not an example of AGI, it is considered by some to be too advanced to classify as a narrow AI system.” Terms like narrow AI, weak AI, industrial AI, and others were just marketing terms to expand the umbrella of AI to include things that did not have “intelligence,” but could be programmed to mimic human activities. The term “AI,” for now, is just a marketing gimmick. It started out in the 1950s and still has not made any real advances in cognition. It will only get better at fooling people into thinking it is “intelligent.”

A lot is being said now about how AI is going to make writing, painting, etc “obsolete.” The vinyl record would like to have a word with you about that, but even if you were to believe the hype… there is more to consider. Whenever people proclaim that something is “obsolete,” always ask “for who?” In the case of emerging technologies, the answer is almost always “for the rich elite.” Technological advances often only remain free or low cost as long as they need guinea pigs to refine them.

If you are pushing students to use a new technology now because it is free, then you are just hastening the day that they will no longer be able to afford to use it. Not to mention the day it will be weaponized against them. If you don’t believe me, look at the history of Ed-Tech. How many tools started off as free or low cost, got added to curriculum all over the place, and then had fees added or increased? How many Ed-Tech tools have proven to be harmful to the most vulnerable? Do you think AI is going to actually plot a different course than the tech before it?

No, you see – AI is often being created by certain people who have always been able to pay other people to do their work. It was just messy to deal with underpaid labor, so AI is looked at behind the scenes as a way to ditch the messy human part of exploitation. All of the ways you are recommending that ChatGPT be used in courses are just giving them free labor to perfect the tool (at least to their liking). Once they are done with that, do you really think it will stay free or low cost? Give me a break if you are gullible enough to believe it will.

Mark my words: at some point, if the “Age of AI” hype takes off, millions of teachers are suddenly going to have to shift lesson plans once their favorite AI tool is no longer free or is just shut down. Or tons of students that don’t have empathetic teachers might have to sell another organ to afford class AI requirements.

(You might notice how all of the “thoughtful integration of AI in the classroom” articles are by people that would benefit greatly, directly or indirectly, from hundreds or thousands more students using ChatGPT.)

On top of all of that – even if the hype does prove true, at this point AI is still untested and unproven. Why are so many people so quick to push an untested solution on their students? I even saw someone promoting how students could use ChatGPT for counseling services. WTF?!?!? It is just abusive to send students who need counseling to unproven technology, no matter how short-staffed your Mental Health Services are.

But wait – there is more! People have also been raising concerns (rightly so) about how AI can be used for intentionally harmful purposes, like everything from faking revenge porn to stock market manipulation. I know most AI evangelists express concern about this as well, but do they realize that every usage of AI for experimentation or course work will increase the ability of people to intentionally use AI for harm? I have played around with AI as much as anyone else, but I stopped because I realized that if you are using it, you are adding to the problem by helping to refine the algorithms. What does it mean, then, to unleash classes full of students onto it as free labor to improve it?

To be clear here, I have always been a fan of AI generated art, music, and literature as a sub-genre itself. The weird and trippy things that AI has produced in the past are a fascinating look at what happens when you only have a fraction of the trillions of constructs and social understandings we have in our minds. But to treat it as a technology that is fully ready to unleash on our students without much critical thought about how dangerous that could be?

Also, have you noticed that almost every AI ethics panel out there is made entirely of pro-AI people and no skeptics? I am a fan of the work of some of these “pro-AI people,” but where is the true critical balance?

I get that we can’t just ignore AI and do nothing, but there is still a lot of ground between “sticking our heads in the sand” and “full embrace”. For now, like many instructors, I tell students just not to use it if they want to pass. I’m not sure I see that changing any time soon. Most of the good ideas on all of those lists and articles of “how to integrate AI into the classroom” are just basic teaching designs that most teachers think of in their sleep. But what damage is going to be done by institutional leaders and company owners rushing in to implement unproven technology? That may “disrupt” things in ways that we don’t want to see come to fruition.