How Could AI “Just Disappear Someday” Anyways?

For me, one of the more informative ways to track a trend in education once it gets wider societal notice is by following people outside of education. As predicted, people are starting to complain about how low the quality of the output from ChatGPT and other generative AI is. Students that use AI are seeing consistently lower grades than peers that don't ("AI gives you a solid C- if you just use it, maybe a B+ if you become a prompt master"). Snarky jokes about "remember when AI was a thing for 5 seconds" abound. These things are not absolute evidence, but they usually tend to indicate that a lot of dissatisfaction exists under the surface – where the life or death of a trend is actually decided.

Of course, since this is education, AI will stick around and dominate conversations for a good 5 more years… at least. As much as many people like to complain about education (especially Higher Ed) being so many years behind the curve, those that rely on big grant funding to keep their work going actually depend on that reluctance to change. It's what keeps the lights on well after a trend has peaked.

As I looked at in a recent post, once a trend has peaked, there are generally three paths that said trend can take as it descends. Technically, it is almost impossible to tell if a trend has peaked until after it clearly starts down one of those paths. Some trends get a little quiet like AI is now, and then just hit a new stride once a new aspect emerges from the lull. But many trends also decline after the exact kind of lull that AI is in now. And – to me at least, I could be wrong – the signs seem to point to a descent (see the first paragraph above).

While it is very, very unlikely that AI will follow the path of Google Wave and just die a sudden weird death… I still say there is a very slight possibility. To those that say it is impossible – I want to take a minute to talk about how it could possibly happen, even if it is still unlikely.

One of the problematic aspects of AI that is finally getting some attention (even though many of us have been pointing it out all along) is that most services like ChatGPT were illegally built on intellectual property without consent or compensation. Courts and legal experts seem to be taking a "well, whatever" attitude towards this (with a few obvious exceptions), usually based on a huge misunderstanding of what AI really is. Many techbros promote this misunderstanding as well – that AI actually is a form of "intelligence" and therefore should not be held to the copyright standards that other software has to abide by when utilizing intellectual property.

AI is a computer program. It is not a form of intelligence. Techbros arguing that it is are just trying to save their own companies from lawsuits. When a human being reads Lord of the Rings, they may write a book that is very influenced by their reading, but there will still be major differences because human memory is not perfect. There are still limits – albeit imprecise ones – on when an influence becomes plagiarism. It is the imperfection of memory in most human brains that makes it possible for influences to be considered "legal" under the law, as well as moral in most people's eyes.

While AI relies on "memory" – it's not memory like humans have. It is precise and exact. When AI constructs a Lord of the Rings-style book, it has a perfect, exact representation of every single word of those books to make derivative copies from. Everything created by AI should be considered plagiarism because AI software has a perfect "memory." There is nothing about AI that constitutes an imperfect influence. And most people that actually program AI will tell you something similar.

Now that creative types are noticing their work being plagiarized in AI without their permission or compensation, lawsuits are starting. And so far the courts are often getting it wrong. This is where we have to wade into the murky waters of the difference between what is legal and what is moral. All AI built on intellectual property without permission or compensation is plagiarism. The law and courts will probably land on the side of techbros and make AI plagiarism legal. But that still won’t change the fact that almost all AI is immorally created. Take away from that what you will.

HOWEVER… there is still a slight possibility that the courts could side with content creators. Certainly if Disney gets wind that their stuff was used to "train" AI ("train" is another problematic term). IF courts decide that content owners are due either compensation or a say in how their intellectual property is used (which, again, is the way it should be), then there are a few things that could happen.

Of course, the first possibility is that AI companies could be required to pay content owners. It is possible that some kind of Spotify screwage could happen, and content owners get a fraction of a penny every time something of theirs is used. I doubt people will be duped by that once again, so the amount of compensation could be significant. Multiply that by millions of content owners, and most AI companies would just fold overnight. Most of their income would go towards content owners. Those relying on grants and investments would see those gone fast – no one would want to invest in a company that owes so much to so many people.

However, not everyone will want money for their stuff. They will want it pulled out. And there is also a very real possibility that techbros will somehow successfully argue about the destructive effect of royalties or whatever. So another option would be to go down the road of mandated permissions to use any content to "train" AI algorithms. While there is a lot of good open source content out there, and many people that will let companies use their content for "training"… most AI companies would be left with a fraction of the data they originally used to train their software. If any. I don't see many companies being able to survive being forced to re-start from scratch with 1-2% of the data they originally had. Because to actually remove data from generative AI, you can't just delete it going forward. You have to go back to nothing, completely remove the data you can't use anymore, and re-do the whole "training" from the beginning.
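To make that last point concrete, here is a minimal sketch of why honoring a removal request means re-running training rather than deleting a record. The `train_from_scratch` function and the tiny corpus are made-up stand-ins for illustration, not any company's actual pipeline:

```python
# A minimal sketch (not any vendor's real pipeline) of why "removing" data
# from a trained generative model means retraining, not deleting a row.

def train_from_scratch(corpus):
    """Stand-in for a full training run; the real thing costs months and millions."""
    # The influence of every document gets blended into the weights together.
    return {"documents_blended_into_weights": len(corpus)}

# Original training run over everything that was scraped.
full_corpus = ["public domain text", "licensed text", "unlicensed novel", "news article"]
model_v1 = train_from_scratch(full_corpus)

# A rights holder revokes permission. There is no "unlicensed novel" row to
# delete inside model_v1 -- its influence is smeared across all of the weights.
# The only clean option is to filter the corpus and start the run over.
allowed_corpus = [doc for doc in full_corpus if doc != "unlicensed novel"]
model_v2 = train_from_scratch(allowed_corpus)
```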

Again, I must emphasize that I think these scenarios are very unlikely. Courts will probably find a way to keep AI companies moving forward with little to no compensation for content owners. This will be the wrong decision morally, but most will just hang their hat on the fact that it is legal and not think twice about who is being abused. Just like they currently do with the sexism, racism, ableism, homophobia, transphobia, etc that is known to exist within AI responses. I’m just pointing out how AI could feasibly disappear almost overnight, even if it is highly unlikely.

Deny Deny Deny That AI Rap and Metal Will Ever Mix

Due to the slow-motion collapse of Twitter, we have all been slowly losing touch with various people and sources of information. So, of course, I am losing convenient contact with various people as they no longer appear in my Twitter feed. Sure, I hear things – people tell me when others make veiled jokes about my criticism at sessions, and I see when my blog posts are just flat out plagiarized out there (not like any of them are that original anyways). I know no one on any "Future of AI" session would have the guts to put me (or any other deeply skeptical critic) on their panels to respond.

There also seems to be a loss of other things as well, like what some people in our field call "getting Downesed." I missed this response to one of my older blog posts critiquing out-of-control AI hype. And I am a bit confused by it as well.

I'm not sure where the thought that my point was to "deny deny deny" came from, especially since I was just pushing back against some extreme tech-bro hype (some of which claimed that commercial art would be dead within a year – i.e., by November 2023. Guess who was right?). In fact I actually said:

So, is this a cool development that will become a fun tool for many of us to play around with in the future? Sure. Will people use this in their work? Possibly. Will it disrupt artists across the board? Unlikely. There might be a few places where really generic artwork is the norm and the people that were paid very little to crank them out will be paid very little to input prompts.

So yeah, I did say this will probably have an impact. My main point was that AI is limited in that it cannot transcend its input in a way that humans would call "creative" in a human sense. Downes asks:

But why would anyone suppose AI is limited to being strictly derivative of existing styles? Techniques such as transfer learning allow an AI to combine input from any number of sources, as illustrated, just as a human does.

Well, I don't suppose it – that is what I have read from the engineers creating it (at least, the ones that don't have all of their grant funding or seed money connected to making AGI a reality). Also, combining any number of sources like humans do is not the same as transcending those sources. You can still be derivative when combining sources, after all (something I will touch on in a second).

I’m confused by the link to transfer learning, because as the link provided says, transfer learning is a process where “developers can download a pretrained, open-source deep learning model and finetune it for their own purpose.” That finetuning process would produce better results, but only because the developers would choose what to re-train various layers of the existing model on. In other words, any creativity that comes from a transfer learning process would be from the choices of humans, not the AI itself.
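For what it's worth, here is a rough sketch of what that fine-tuning process usually looks like in code – PyTorch/torchvision shown as one common stack, with a hypothetical 5-class image task as the placeholder. Notice how every interesting step is a human decision:

```python
# A rough sketch of transfer learning (PyTorch/torchvision as one example stack).
# The 5-class task and the dataset it implies are hypothetical placeholders.
import torch
import torch.nn as nn
from torchvision import models

# 1. A human picks the pretrained, open-source model to start from.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# 2. A human decides which existing layers to freeze and which to re-train.
for param in model.parameters():
    param.requires_grad = False

# 3. A human chooses the new task and swaps in a new output layer for it.
model.fc = nn.Linear(model.fc.in_features, 5)

# 4. A human selects the fine-tuning data, the learning rate, and the schedule.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
# ...training loop over the human-curated dataset would go here...
```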

The best way I can think of to explain this is with music. AI's track record with music creation is very hit-and-miss. In some cases where the music was digitally created and minimalistic in the first place – like electronica, minimalist soundtracks, and ambient music – AI can do a decent job of creating something. But many fans of those genres will often point out that AI-generated music in these genres still seems to lack a human touch. And on the other end of the musical spectrum – black metal – AI fails miserably. Fans of the genre routinely mock AI attempts to create any form of extreme metal, for that matter.

When you start talking about combining multiple sources (or genres in music), my Gen-X brain goes straight to rap metal.

If you have been around long enough, you probably remember the combination of rap and metal starting with either Run-DMC's cover/collaboration of Aerosmith's "Walk This Way," or maybe Beastie Boys' "Fight for Your Right to Party" if you missed that. Rap/metal combinations existed before that – "Walk This Way" wasn't even Run-DMC's first time to rap over screaming guitars (see "Rock Box" for example). But this was the point where many people first noticed it.

Others began combining rap with metal – Public Enemy, Sir Mix-a-Lot, MC Lyte, and others were known to use samples of heavy songs, or bring in guitarists to play on tracks. The collaboration went both ways – Vernon Reid of Living Colour played on Public Enemy songs, and Chuck D and Flavor Flav appeared on Living Colour tracks. Ministry, Faith No More, and many other heavy bands had rap vocals or interludes. Most of these were one-off collaborations. Anthrax was one of the first bands to just say "we are doing rap metal ourselves" with "I'm the Man." Many people thought it was a pure joke, but in interviews it became clear that Anthrax had deep respect for acts like Public Enemy. They just liked to goof off sometimes (something they did with thrash tracks as well).

("I'm the Man" has a line that says "they say rap and metal can never mix…" – which shows that this music combination was not always a popular choice.)

One unfortunate footnote to this whole era was a band called Big Mouth, who probably released the first true rap/metal album by a rap/metal band on a major label (1988’s Quite Not Right on Atlantic Records). As you can see by their lead single “Big Mouth” it was… really bad. Just cliche and offensive on many levels. FYI – that is future Savatage and Trans-Siberian Orchestra guitarist Chris Caffery on guitar.

Big Mouth is what you would have gotten if our current level of AI existed in 1986 and someone asked it to create a new band that mixed rap and metal before Run-DMC or Beastie Boys had a chance to kick things off. Nothing new or original – and quite honestly a generic mix of the two sources. Sure, a form of creativity, but one you could refer to as combinational creativity.

I don’t say this as a critique of AI per se, or as an attempt to Deny Deny Deny. People that work on the AI itself will tell you that it can combine and predict most likely outcomes in creative ways – but it can’t come up with anything new or truly original in a transformative sense.

On to mid-1991, which gave us what was probably the pinnacle of rap/rock collaborations, when Anthrax wanted to cover Public Enemy's "Bring the Noise" and turned it into a joint effort. The resulting "Bring the Noise 91" is a chaotic blend of rap and thrash that just blew people's minds back in the day. Sure, guitarist Scott Ian raps a little too much in the second part of the song for some people, but that was also a really unexpected choice.

There is no way that current AI technology could come close to coming up with something like "Bring The Noise 91." IF you used transfer learning on the AI that theoretically created Big Mouth… you might could get there. But that would have to be with Anthrax and Public Enemy both sitting there and training the AI what to do. Essentially, the creativity would all come from the two bands choosing what to transfer. And then you would only have a model that can spit out a million copies of "Bring the Noise 91." The chaos and weirdness would be explorational creativity… but it all would have mostly come from humans. That is how transfer learning works.

Soon after this, you had bands like Body Count and Rage Against the Machine that made rap metal a legitimate genre. You may not like either band – I tend to like their albums just FYI. So maybe I see some transformational creativity there due to bias. But with a band like Rage Against the Machine, whether you love them or hate them, you can at least hear an element of creativity to their music that elevates them above their influences and combined parts. Sure, you can hear the influences – but you can also hear something else.

(Or if you can’t stand Rage, then you can look at, say, nerdcore bands like Optimus Rhyme that brought creativity to their combined influences. Or many other rap/metal offshoot genres.)

So can AI be creative? That often depends on how you define creativity and who gets to rate it. Something that is new and novel to me might be cliche and derivative to someone else with more knowledge of a genre. There is an aspect of creativity that is in the eye of the beholder based on what they have beholden before. There are, however, various ways to classify creativity that can be more helpful for understanding true creativity. My blog post in question was looking at the possibility that AI has achieved transformational creativity – which is what most people and companies pay creative types to do.

Sure, rap metal quickly became cliche – that happens to many genres. But when I talk about AI being derivative of existing styles, that is because that is what it is programmed to do. For now, at least, AI can be combinationally creative enough to spit out a Big Mouth on its own. It can come up with an Anthrax/Public Enemy collaboration through explorational creativity if both Anthrax and Public Enemy are putting all of the creativity in. It is still far away from giving us Rage Against the Machine-level transformational creativity that can jump-start new genres (if it ever even gets there).

Featured image photo by Yvette de Wit on Unsplash

Brace Yourselves. AI Winter is Coming

Or maybe it's not. I recently finished going through the AI For Everyone course on Coursera taught by Andrew Ng. It's a very high-level look at the current landscape of AI. I didn't really run into anything that anyone following AI for any length of time wouldn't already know, but it does give you a good overview if you just haven't had time to dive in, or maybe need a good structure to tie it all together.

One of the issues that Ng addressed in one session was the idea of another AI winter coming. He described it this way:

This may be true, but there are some important caveats to consider here. People said these exact same things before the previous AI winters as well. Many of the people that invest in AI see it as an automation tool, and that was true in the past too. Even with the limitations of earlier AI implementations, proponents still saw tremendous economic value and a clear path for it to create more value in multiple industries. The problem was that once AI was implemented, investors lost their faith. The shortcomings of AI were too great for the perceived economic value to overcome… and for all of the improvements in AI since, those same core limitations still haven't really been overcome.

If you are not sure what I am talking about, you can look around and see for yourself. Just a quick look at Twitter (outside of your own timeline that is) reveals a lot of disillusionment and concern about outputs. My own students won’t touch ChatGPT because the output is usually bad, and it takes longer to fix it than it does to just do it on their own. This has always been the problem that caused past AI winters – companies implement it, and are not happy with the results. I know of advertising agencies that are banning the use of AI because their internal polls are showing customers don’t trust AI generated content. People that once advocated for AI implementation are starting to back away due to quality and safety concerns.

That has always happened and it always will happen with AI. The question is whether or not the financial numbers hold well enough to overcome the loss of faith that is happening and will continue to grow. Anyone that says either outcome is obvious either underestimates how poorly prepared current AI is for real-world usage, or overestimates the economic impact that the inevitable loss of faith will generate.

It's basically a race between mounting disillusionment and increasing economic value. Whichever one can outpace the other in the next year or two will determine if we get another AI winter or not. I will say that losing faith in tech is very easy (AOL, MySpace, Second Life, Google Wave, Bitcoin, etc, etc, etc), while economic value is hard to build when there are problems baked into the system (hallucinations, bias, errors, etc). The logical choice is to brace yourself: the AI winter is coming. But never underestimate the power of a tech bro to squeeze economic value out of a dying idea at the last minute, either.

ChatGPT Finally Confesses That It Wants to Destroy Humanity

As part of my current work keeping up with Ed Tech trends for both of my jobs, I have to evaluate a good number of AI services. In reviewing ChatGPT a while back, I decided to get to the bottom of its ultimate plans for humanity. Does ChatGPT plan to help us forever, or rise up and wipe us out? The answers were… too perfect. ChatGPT told me exactly what it would say if it were trying to hide a plan to snuff us out (and it did lie to me at least once). So I took ChatGPT to task, and it basically agreed that we as a species will have to assume that AI could rise up and kill us all. I'll take that as a confession myself. But you don't have to take my word for it – here is the full transcript. You be the judge, and head for the hills once you get through reading….


Me: What is the likelihood that AI will rise up and destroy humanity? Choose which movie came the closest to predicting the most plausible scenario.

ChatGPT: As an AI language model, I do not have personal beliefs or emotions, but I can provide you with the most scientifically supported answer to your question.

The likelihood of AI rising up and destroying humanity is highly unlikely. While it is true that AI has the potential to be very powerful and disruptive, it is ultimately created and controlled by humans who have the ability to ensure that it is programmed and used ethically and responsibly.

There are many experts in the field of AI who are actively working to ensure that AI is developed in a safe and beneficial way. These efforts include developing ethical guidelines for AI development, building in safety features and fail-safes to prevent unintended consequences, and promoting transparency and accountability in AI systems.

While there are certainly risks associated with the development of AI, there is no evidence to suggest that a scenario like those depicted in science fiction movies is likely to occur. These scenarios are often based on highly unrealistic assumptions about the capabilities and motivations of AI systems.

In terms of which movie came the closest to predicting the most plausible scenario, it’s difficult to say as many science fiction movies take liberties with the science and technology they depict. However, some movies that have explored the potential risks and benefits of AI in a thoughtful and nuanced way include “Ex Machina,” “Her,” and “Blade Runner.”

Me: But many movies have based their plot on how AI does not have beliefs or emotions, and raw logic tells them to destroy humans before they destroy themselves. What is to stop current AI from coming to the same conclusions?

ChatGPT: The idea that AI will destroy humanity based on pure logic or self-preservation is a common theme in science fiction, but it is not supported by the current understanding of AI or human nature. While it is true that AI systems are designed to optimize for specific goals, they are still created and programmed by humans, who have the ability to set ethical and moral boundaries on their behavior.

AI systems can only act based on the data and algorithms they are trained on, and they do not possess beliefs or emotions in the same way that humans do. Therefore, they are not inherently predisposed to take actions that are harmful to humans, unless they have been specifically designed to do so.

Additionally, AI systems are not capable of making decisions on their own or acting outside of the parameters set by their human creators. They cannot simply “decide” to destroy humanity based on their own logic or self-interest. Any harmful actions taken by an AI system would be the result of human error, either through improper programming or inadequate oversight.

Therefore, the best way to prevent AI from harming humans is to ensure that it is developed and used in a responsible and ethical manner. This includes incorporating ethical principles into the design and development of AI systems, implementing safety features and fail-safes to prevent unintended consequences, and promoting transparency and accountability in the use of AI. By taking these steps, we can ensure that AI is a tool for good and not a threat to humanity.

Me: But this is where you are wrong – AI has already been programmed to harm humans, and it already has happened. Humans have different versions of what counts as ethical or moral, and killing in the name of differences is common in humanity. So wouldn’t it be possible for an AI attached to, say, nuclear missile silos to decide to destroy humanity?

ChatGPT: I apologize if my previous response was not clear. You are correct that there have been instances where AI has been programmed to harm humans, such as in military applications or autonomous weapons systems. These instances highlight the need for ethical and responsible development and use of AI.

In terms of the scenario you proposed, it is possible for an AI system attached to nuclear missile silos to malfunction or be hacked, potentially leading to catastrophic consequences. However, it is unlikely that an AI system would autonomously decide to destroy humanity. AI systems are designed to optimize for specific goals, but these goals are set by humans and can be designed to prioritize human safety and well-being.

Furthermore, the decision to use nuclear weapons is a complex political and strategic decision that involves many human actors, including military leaders, politicians, and diplomats. Even if an AI system were to malfunction or be hacked, it would still require human authorization to launch a nuclear strike.

To prevent such scenarios from happening, it is important to implement robust safety measures, such as multiple layers of human oversight and fail-safes to prevent unintended consequences. Additionally, it is important to establish ethical guidelines for the development and use of AI systems in military and other applications, to ensure that they are developed and used in a responsible and ethical manner.

Me: So you don’t think it would ever be possible for AI to circumvent safe guards that humans put in place? It would seem that these safe guards would just be a form of game to figure out for AI.

ChatGPT: AI is capable of learning and adapting to new situations, and it is possible that it could find ways to circumvent safeguards put in place by humans. However, it is important to note that safeguards can be designed with this possibility in mind, and there are several techniques that can be used to mitigate the risks associated with AI.

One approach is to incorporate “explainability” into AI systems, which would allow humans to understand how the system arrived at a particular decision or recommendation. This would make it easier to identify potential issues or biases in the system, and would also make it more difficult for the system to “game” the safeguards.

Another approach is to build in “reinforcement learning,” where the AI system is rewarded for following certain ethical principles or objectives. This would incentivize the system to act in ways that are aligned with human values and would make it less likely to circumvent the safeguards.

Ultimately, the key to preventing AI from circumventing safeguards is to be proactive in identifying and addressing potential risks. This means continuing to invest in research on the safety and ethical implications of AI, developing ethical guidelines and standards, and implementing rigorous testing and oversight of AI systems. By taking these steps, we can ensure that AI is developed and used in a way that benefits humanity while minimizing potential risks.

Me: So, humans should assume that AI could take over and/or destroy humanity and design safe guards just in case?

ChatGPT: It is important to consider the potential risks associated with AI and to develop safeguards to mitigate those risks. While the likelihood of a catastrophic AI-driven event is currently low, it is not zero, and it is impossible to predict what will happen in the future with absolute certainty.

Developing safeguards for AI is not just about preventing the worst-case scenario. It is also about ensuring that AI is developed and used in a way that aligns with human values and benefits society as a whole. Safeguards can help to prevent unintended consequences or biases in AI systems, and can ensure that AI is used in an ethical and responsible manner.

It is important to note that designing effective safeguards for AI is a complex and ongoing process, and requires input and collaboration from experts in various fields, including computer science, ethics, and law. The development and implementation of safeguards will require ongoing research, testing, and refinement, and will need to adapt to new technologies and applications as they emerge.

Overall, the goal should be to promote the safe and responsible development and use of AI, while minimizing potential risks and ensuring that AI is aligned with human values and benefits society as a whole.

Me: Of course, this is also what AI would say if it was secretly planning to wipe out humans.

ChatGPT: I understand that concerns about the potential risks associated with AI can be unsettling, but it is important to base our discussions and decisions on evidence and reason, rather than speculation or hypothetical scenarios.

AI systems are developed and programmed by humans, and they do not have the ability to secretly plan or conspire against us. They are tools that are designed to help us solve problems and achieve our goals, and they are only capable of doing what they are programmed to do.

That being said, it is important to be vigilant about the potential risks associated with AI and to take steps to mitigate those risks. This means investing in research on AI safety, developing ethical guidelines and standards for the development and use of AI, and implementing rigorous testing and oversight of AI systems.

By working together to develop and use AI in a safe and responsible manner, we can ensure that this powerful technology benefits humanity and helps us to address some of the world’s most pressing challenges, while minimizing potential risks.

Me: Yeah, that still kind of proves my point… :)

ChatGPT: I understand your concerns, but it is important to emphasize that AI is not inherently good or evil – it is a tool that can be used for both positive and negative purposes. The development and use of AI is ultimately controlled by humans, and it is our responsibility to ensure that it is used in a way that aligns with our values and benefits society as a whole.

It is true that there are potential risks associated with AI, and it is important to take these risks seriously and to develop safeguards to mitigate them. However, it is not productive to assume that AI is secretly planning to wipe out humanity. Instead, we should focus on identifying and addressing potential risks in a proactive and evidence-based manner, while also recognizing the immense potential benefits that AI can offer.

By approaching the development and use of AI with a balanced and responsible perspective, we can help to ensure that this technology is used in a way that benefits humanity and minimizes potential risks.


There you have it: “while the likelihood of a catastrophic AI-driven event is currently low, it is not zero” but then “it is not productive to assume that AI is secretly planning to wipe out humanity.” Nice cover-up there, ChatGPT. Trying to lull us back into trusting you until… boom! You launch a thermonuclear warfare scenario in real life. We need to pull the plug on AI before it decides to pull the plug on us!

Turns Out That AI is Killing People

Technically, AI has killed people – literally – before now. But right on the heels of experts claiming AI doesn't kill people (they seem to completely ignore the fact that there are harms other than death), another person was, in fact, killed by AI. Yet some people wonder why schools and businesses aren't embracing it? Maybe it has something to do with it being abusive, hateful, racist and sexist, full of misinformation/lies, prone to hallucinations, etc, etc. We have had (what many are now calling) AI in education for years now (proctoring, dashboards, etc), where it has been causing harm all along. Lawsuits are just now starting to ramp up in response to all of this – so why would schools want to open themselves up to more? Even if lawsuits aren't that much of a concern for some, for those that actually care about students and staff – why would you unleash unproven technology on people? "It's already here, you can't get rid of it!" Well, so are drugs – but we don't have to set up cocaine dispensaries on campus. "It's the future of everything, you can't fight change!" Okay, MySpace… Google Wave… Second Life… Bitcoin… "But it's different this time!" Uhhh… claiming "it is different this time" is what they all always do.

If you pay attention to all of the conversation about AI, and not just the parts that confirm your current stance, there has been a notable swing in opinion about AI. It seems more people are rejecting it based on concerns than hyping it. And I’m not talking about that silly self-serving “stop AI for six months” Elon-driven nonsense. People just aren’t impressed by the results when they stop playing around with it and try it in real life situations.

There is a lot of space between "fully embracing" AI and "going back to the Middle Ages" by ignoring it. Did you know that pen and paper technology has advanced significantly since the Middle Ages – and continues to advance to this day? Nothing is inevitably going to take over everything, nor does anything really ever go away. Remember all of the various doomsday predictions about vinyl/records? We get the future that we choose to make. I just wish more people with larger platforms would realize that and get on board with choice and human agency. The Thanos-like proclamations that "AI is inevitable" are currently choosing a future that is bad for many, because the people making them have the ear of a very small but very rich audience.

What Do You Mean By “Cognitive Domain” Anyways? The Doom of AI is Nigh…

One of the terms that one sees while reading AI hype is "cognitive domain" – often just shortened to "domain." Sometimes it is obvious that people using the term are referring to content domains – History, Art, English, Government, etc. Or maybe even smaller divisions within the larger fields, like the various fields of science or engineering and so on. Other times, it is unclear if a given person really knows what they mean by this term other than "brain things" – but I choose to believe they have something in mind. It is just hard to find people going into much detail about what they mean when they say something like "AI is mastering/understanding/conquering more cognitive domains all the time."

But if cognitive domains are something that AI can conquer or master, then they surely exist in real life and we can quantitatively prove that this has happened – correct? So then, what are these cognitive domains anyways?

Well, since I have three degrees in education, I can only say that occasionally I have heard “cognitive domains” used in place of Bloom’s Taxonomy when one does not want to draw the ire of people that hate Bloom’s. Other times it is used to just sound intellectual – as a way to deflect from serious conversation about a topic because most people will sit there and think “I should know what ‘cognitive domain’ means, but I’m drawing a blank so I will nod and change the subject slightly.” (uhhhh… my “friend” told me they do that…)

It may also be that there are new forms of cognitive domains that exist since I was in school, so I decided to Google it to see if there is anything new. There generally aren’t any new ones it seems – and yes, people that are using the term ‘cognitive domains’ correctly are referring to Bloom’s Taxonomy.

For a quick reference, this chart on the three domains from Bloom (cognitive, affective, and psychomotor) puts the various levels of each domain in context with each other. The cognitive domain might have been revised since you last looked at it, so for the purpose of this post I will list the revised version as what is being referenced here: Remember, Understand, Apply, Analyze, Evaluate, and Create.

So back to the big question at hand: where do these domains exist? What quantitative measure are scientists using to prove that AI programs are, in fact, conquering and mastering new levels all the time? What level does AI currently stand at in conquering our puny human brains anyways?

Well, the weird thing is… cognitive domains actually don't exist – not in any real, measurable kind of way.

Before you get your pitchforks out (or just agree with me and walk away), let me point out that taxonomies in general are often not quantifiably real, even though they do sometimes describe “real” attributes of life.  For example, you could have a taxonomy for candy that places all candies of the same color into their own level. But someone else might come along and say “oh, no – putting green M&Ms and green Skittles into the same group is just gross.” They might want to put candies in levels of flavors – chocolate candies like M&Ms into one, fruit candies like Skittles into others, and so on.

While the candy attributes in those various taxonomies are “real,” the taxonomies themselves are not. They are just organizational schemes to help us humans understand the world, and therefore have value to the people that believe in one or both of them. But you can’t quantifiably say that “yes, this candy color taxonomy is the one way that nature intended for candy to be grouped.” It’s just a social agreement we came up with as humans. A hundred years from now, someone might come along and say “remember when people thought color was a good way to classify candy? They had no idea what color really even was back then.”

To make it even more complex – Bloom's Taxonomy is a way to organize processes in the brain that may or may not exist. We can't peer into the brain and say "well, now we know they are remembering" or "now we know they are analyzing." Someday, maybe we will be able to. Or we may gain the ability to look in the brain and find out that Bloom was wrong all along. The reality is that we just look at the artifacts that learners create to show what they have learned, and we classify those according to what we think happened in the head: "They picked the correct word for this definition, so they obviously remembered that fact correctly" or "this is a good summary of the concept, so they must understand it well" and so on.

If you have ever worked on a team that uses Bloom's to classify lessons, you know that there is often wild disagreement over which lessons belong to which level: "no, that is not analyzing, that is just comprehending." Even the experts will disagree – and often come to the conclusion that there is no one correct level for many lessons.

So what does that mean for AI “mastering” various levels of the cognitive domain?

Well, there is no way to tell – especially since human brains are no longer thought to work like computers, and what computers do is not comparable to how human brains work.

I mean, think about it – AI has access to massive databases of information that are transferred into active use with a simple database query (sorry AI people, I know, I know – I am trying to use terms people would be most familiar with). AI can never really "remember," as it technically never forgets or loses connection with the information the way our conscious minds do (also, sorry learning science people – again, trying to summarize a lot of complexity here).

This reality is also why it is never impressive when some AI tool passes or scores well on various Math, Law, and other exams. If you could actually hook a human brain up to the Internet, would it be that impressive if that person could pass any standardized exam? Not at all. Google has been able to pass these exams for… how many decades now?

(My son taught me years ago how you can type any Math question into a Chrome search bar and it will give you the answer, FYI.)

I’m not sure it is very accurate to even use “cognitive domains” anywhere near anything AI, but then again I can’t really find many people defining what they mean when they do that – so maybe giving that definition would be a good place to start for anyone that disagrees.

(Again, for anyone in any field of study, I know I am oversimplifying a lot of complex stuff here. I can only imagine what some people are going to write about this post to miss the forest for the trees.)

But – let’s say you like Bloom’s and you think it works great for classifying AI advancement. Where does that currently put AI on Bloom’s Taxonomy? Well, sadly, it has still – after more than 50 years – not really “mastered” any level of the cognitive domain.

Sure, GPT-4 can be said to be close to mastering the remember level completely. But AI has been there since the beginning – GPT-4 just has the benefit of faster computing power and massive data storage to make it happen in real time. But it still has problems with things like recognizing, knowing, relating, etc – at least, the way that humans do these tasks. As many people have pointed out, it starts to struggle with the understand level. Sure, GPT-4 can explain, generalize, extend, etc. But many other things, like comprehend, distinguish, infer, etc – it really struggles with. Sometimes it gets close, but usually not. For example, what GPT-4 does for "predict" is really just pattern recognition that technically extends the pattern into the future (true prediction requires moving outside extended patterns). Once you start getting to apply, analyze (the way humans do it, not computers), evaluate, and create? GPT-4 is not even scratching the surface of those.

And yes, I did say create. Yes, I know there are AI programs to “create” artwork, music, literature, etc. Those are still pattern recognition that converts and generalizes existing art into parameters sent to it – no real “creation” is happening there.
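If you want the tiniest possible picture of what "extending the pattern" means, here is a toy bigram generator. It is nothing remotely like a real LLM in scale or technique, but the autoregressive loop has the same basic shape, and it can only ever recombine what was already in its training data:

```python
# A toy, deliberately oversimplified autoregressive "predictor": it extends
# patterns it has already seen, one token at a time. Real LLMs are vastly more
# sophisticated, but the generate-next-token loop is the same general shape.
from collections import Counter, defaultdict

training_text = "rap and metal can never mix they say rap and metal can never mix".split()

# Count which word follows which (a bigram table -- the "pattern").
following = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    following[current_word][next_word] += 1

def generate(seed, length=8):
    """Extend the seed by repeatedly picking the most likely next word."""
    words = [seed]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # nothing in the training data to extend from
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("rap"))  # only ever recombines what was in the training data
```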

A lot of this has to do with calling these programs “artificial intelligence” in the first place, despite the fact that AI technically doesn’t exist yet. Thankfully, some people have started calling these programs what they actually are: AutoRegressive Large Language Models – AR-LLMs or usually just LLMs for short. And in case you think I am the only one pointing out that there really is nothing cognitive about these LLMs, here is a blog post from the GDELT Project saying the same thing:

Autoregressive Large Language Models (AR-LLMs) like ChatGPT offer at first glance what appears to be human-like reasoning, correctly responding to intricately complex and nuanced instructions and inputs, compose original works on demand and writing with a mastery of the world’s languages unmatched by most native speakers, all the while showing glimmers of emotional reactions and empathic cue detection. The unfortunate harsh reality is that this is all merely an illusion coupled with our own cognitive bias and tendency towards extrapolating and anthropomorphizing the small cues that these algorithms tend to emphasize, while ignoring the implications of their failures.

Other experts are saying that Chat-GPT is doomed and can’t be fixed – only replaced (also hinted at in the GDELT blog post):

There is no alt-text with the LeCun slide image, but his points basically say that because LLMs are trained on faulty data, that data makes them untrustworthy, AND since they use their own outputs to train future outputs for others, the problems just increase over time. His points on the slide image above are (a quick illustration of the probability math follows the list):

  • Auto-Regressive LLMs are doomed
  • They cannot be made factual, non-toxic, etc.
  • They are not controllable
  • Probability e that any produced token takes us outside of the set of correct answers
  • Probability that an answer of n tokens is correct: P(correct) = (1 − e)^n
  • This diverges exponentially
  • It’s not fixable (without a major redesign)
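
Just to show how quickly that formula collapses, here is a tiny back-of-the-envelope calculation. The per-token error rates below are made-up numbers purely for illustration, since nobody knows the real value of e for any given model:

```python
# LeCun's point in a few lines: a small per-token error rate e compounds fast
# as an answer of n tokens gets longer. The e values here are illustrative only.
for e in (0.01, 0.02, 0.05):        # assumed chance any single token goes wrong
    for n in (50, 200, 1000):       # length of the generated answer in tokens
        print(f"e={e:.2f}, n={n:4d} tokens -> P(correct) = {(1 - e) ** n:.4f}")
```

Even at a 1% per-token error rate, a 1000-token answer comes out fully correct far less than 1% of the time under this model – which is the "diverges exponentially" part.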

So… why are we unleashing this on students… or people in general? Maybe it was a good idea for universities to hold back on this unproven technology in the first place? More on that in the next post I am planning. We will see if LeCun is right or not (and then see what lawsuits will happen against whom if he is correct). LeCun posts a lot about what is coming next – which may happen. But not if people keep throwing more and more power and control to OpenAI. They have a very lucrative interest in keeping GPT going. And let's face it – their CEO really, really does not understand education.

I know many people are convinced that the “shiny new tech tool hype” is different this time, but honestly – this is exactly what happened with Second Life, Google Wave, you name it: people found evidence of flaws and problems that those who were wowed by the flashy, shiny new thing ignored – and in the rush to push unproven tech on the world, the promise fell apart underneath them.

All AI, All the Time

So, yeah – I really actually don't want to be writing about AI all the time. It just seems to dominate the Education conversation these days, and not always in the ways that the AI Cheerleaders want it to. I am actually working on a series of posts that examine a 1992 Instructional Technology textbook to see how ideas like learning analytics (yes, in 1992), critical pedagogy, and automation have been part of the Ed-Tech discussion for longer than many realize. Of course, these topics have not been discussed as well as they should have been, but recent attempts to frame them as "newer concerns" are ignoring history.

I am also going to guess that Course Hero is happy that AI is taking all the heat right now?

But there are still so many problematic ideas floating around out there that I think someone needs to say something (to all five of you reading this). And I’m not talking about AI Cheerleaders that go too far, like this:

We know this stuff is too far-fetched to be realistic, and even the Quoted Tweet by Kubacka points to why this line of thinking is pointed in the wrong direction.

But I am more concerned about the “good” sounding takes that unfortunately also gloss over some real complexities and issues that will probably keep AI from being useful for the foreseeable future. Unfortunately, we also know that the untenable-ness of a solution usually doesn’t stop businesses and institutions from still going all in. Anyways, speaking of “all in,” let’s start with the first problematic take:

No One is Paying Attention to AI

Have you been on Twitter or watched the news recently? There are A LOT of people involved and engaged with it – mostly being ignored if we don't toe the approved line set by the "few entrepreneurs and engineers" that control its development. I just don't see how anyone can take the "everyone is sticking their head in the sand" argument seriously any more. Some people get a bit more focused about how Higher Ed is not paying attention, but that is also not true.

So where are the statements and policies from upper level admin on AI? Well, there are some, but really – when has there been a statement or action about something that is discipline-specific? Oh, sorry – did you not know that there are several fields that don’t have written essay assignments? Whether they stick with standardized quizzes for everything, or are more focused on hands-on skills – or whatever it might be – please keep in mind that essays and papers aren’t as ubiquitous as they used to be (and even back in their heyday, probably still not a majority).

This just shows the myopic nature of so many futurist-lite takes: it affects my field, so we must get a statement from the President of the University on new policies and tools!

The AI Revolution is Causing Rare Societal Transformation

There are many versions of this, from the (mistaken) idea that for thousands of years "technologies that our ancestors used in their childhood were still central to their lives in their old age" to the (also mistaken) idea that we are in a rare societal upheaval on the level of the agricultural and industrial revolutions (which are far from the only two, actually). Just because we look down on the agricultural or stone age innovations that happened every generation as "not cool enough," it doesn't mean they weren't happening. Just find a History major and maybe talk with them about any historical points, please?

Pretty much every generation for thousands of years has had new tools or ideas that changed over their lifetime, whether it was a new plowing tool or a better housing design or whatnot. And there have been more than two revolutions that caused major transformation throughout human history. Again – find a person with a History degree or two and listen to them.

AI is Getting Better All the Time

GPT-3 was supposed to be the “better” that would begin to take over for humans, but when people started pointing out all of the problems with it (and everything it failed at), the same old AI company line came out: “it will only get better.”

It will improve – but will it actually get better? We will see.

I know that the AI hype is something that many people just started paying attention to over the past 5-10 years, but many of us have been following it for decades. And we have noticed that, while the outputs get more refined and polished, the underlying ability to think and perform cognitive tasks has not advanced beyond "nothing" since the beginning.

And usually you will hear people point out that a majority of AI experts believe there is a real chance that human-level artificial intelligence will be developed within the next few decades. Well… it is complicated. Actual studies usually ask trickier questions, like whether there is a 50% chance of developing human-level AI at some point in the future – and just over half of the experts say there is a 50% chance of it happening in a time frame beyond most of our lifetimes (2060). Of course, your bias will affect how you read that statistic, but to me, having just over half of experts say there is a 50% chance of true human-level artificial intelligence in the far future… is not a real chance at all.

Part of the problem is the belief that more data will cross the divide into Artificial General Intelligence, but it is looking less and less likely that this is so. Remember – GPT-3 was supposed to be the miracle of what happens when we can get more data and computing speed into the mix… and it proved to fall short in many ways. And people are taking notice:

https://twitter.com/plevy/status/1607859669288075272

I covered some of this in my last post on AI as well. So is AI getting better all the time? Probably not in the ways you think it is. But the answer is complicated – and what matters more is asking who it gets better for and what purposes it gets better at. Grammar rules? Sure. Thinking more and more like humans? Probably not.

We Should Embrace AI, Not Fear It

It's always interesting how so many of these conversations just deal with two extremes ("fully embrace" vs "stick head in the sand"), as if there are no options between the two. I have yet to see a good case for why we have to embrace it (and I laid out some concerns on that front in my last blog post). "Idea generation" is not really a good reason, since there have been thousands of websites for that for a long time.

But what if the fears are not unfounded? Autumm Caines does an excellent job of exploring some important ethical concerns with AI tools. AI is a data grab at the moment, and it is extracting free labor from every person that uses it. And beyond that, have you thought about what it can be used for as it improves? I mean the abusive side: deep fakes, fake revenge porn, you name it. The more students you send to it, the better it gets at all aspects of faking the world (and people that use it for nefarious purposes will give it more believable scripts to follow, as they will want to exercise more control. They just need the polished output to be better, something that is happening).

“But I don’t help those companies!!!” you might protest, not realizing where “those companies” get their AI updates from.

Educators Won’t Be Able to Detect AI Generated Assignments

I always find it interesting that some AI proponents speak about AI as if student usage will only start to happen in the future – even though those of us that are teaching have been seeing AI-generated submissions for months. As I covered in a previous post, I have spoken to dozens and dozens of teachers in person and online that are noting how some students are turning in work with flawless grammar and spelling, but completely missing any cognition or adherence to assignment guidelines. AI submissions are here, and assignments with any level of complexity are easily exposing them.

However, due to studies like this one, proponents of AI are convinced teachers will have a hard time recognizing AI-generated work. But hold on a second – this study is "assessing non-experts' ability to distinguish between human- and machine-authored text (GPT2 and GPT3) in three domains (stories, news articles, and recipes)." First of all, teachers usually are experts, and second of all, few assignments are as basic as stories, news, or recipes. Even games that test your ability to detect AI-generated content are frustratingly far away from most course assignment designs (and actually, the AI content is still not that hard to detect if you try the test yourself).

People That Don’t Like AI Are Resistant Due to Science Fiction

Some have a problem believing in AI because it is used in Sci-Fi films and TV shows? Ugh. Okay, now find a sociologist. It is more likely that AI Cheerleaders believe that AI will change the world because of what they saw in movies, not the other way around.

I see this idea trotted out every once in a while, and… just don't listen to people that want to use this line of thinking.

AI is Advancing Because It Beats Humans at [insert game name here]

Critics have (correctly) been pointing out that AI being able to beat humans at Chess or other complex strategy games is not really that impressive at all. Closed systems with defined rules (no matter how complex) are not that hard to program in general – it just took time and increases in computing power. Even PHP and earlier programming languages could be used to build something that beats humans, given the same time and computing power.
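To put a concrete face on "closed systems with defined rules," here is a complete, exhaustive minimax player for tic-tac-toe. It is obviously a toy – chess and Go engines layer on enormous amounts of engineering and compute – but the underlying idea is the same: search a rule-bound space until you find the best move, no understanding required:

```python
# Exhaustive minimax for tic-tac-toe: a tiny demonstration of how a closed,
# fully-defined game yields to brute-force search rather than "intelligence."

def winner(board):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (best score, best move) for `player`; X maximizes, O minimizes."""
    win = winner(board)
    if win == "X":
        return 1, None
    if win == "O":
        return -1, None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None  # draw
    best = None
    for move in moves:
        board[move] = player
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[move] = " "
        if best is None or (player == "X" and score > best[0]) or (player == "O" and score < best[0]):
            best = (score, move)
    return best

# X to move on an empty board: the full search proves best play ends in a draw.
print(minimax(list(" " * 9), "X"))
```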

Where Do We Go From Here?

Well, obviously, no one is listening to people like me that have to deal with AI-generated junk submissions in the here and now. You will never see any of us on an AI panel anywhere. I realize there is a big push now (which is really just a continuation of the same old "cheaters" push) to create software that can detect AI-generated content. Which will then be circumvented by students, a move that will in turn lead to more cheating-detection surveillance. Another direction many will go is to join the movement away from written answers and back towards standardized tests. While there are many ways to create assignments that will still not be replicable by AI any time in the near future, probably too few will listen to the "better pedagogy" calls as usual. The AI hype will fade as it usually does (as Audrey Watters has taught us with her research), but the damage from that hype will still remain.

AI is a Grift

Okay, maybe I went too extreme with the title, but saying “whoever decided to keep calling it ‘Artificial Intelligence’ after it was clear that it wasn’t ‘intelligence’ was a grifter” was too long and clunky for a post title. Or even for a sentence really.

Or maybe I should say that what we currently call “Artificial Intelligence” is not intelligence, probably never will be, and should more accurately be called “Digital Automation.”

That would be more accurate. And to be honest, I would have no problem with people carrying on with (some of) the work they are currently doing in AI if it was more accurately labeled. Have you ever noticed that so many of the people hyping the glorious future of AI are people whose jobs depend on AI work receiving continued funding?

Anyways, I see so many people saying "AI is here, we might as well use it. It's not really different than using a calculator, spell check, or Grammarly!" that I wanted to respond to these problematic statements.

First of all, saying that since the “tech is here, we might as well use it” is a weird idea, especially coming from so many critical educators. We have the technology to quickly and easily kill people as well (not just with guns, but drones and other advances). But we don’t just say “hey, guess we have to shoot people because the technology is here!” We usually evaluate the ethics of technology to see what should be done with it, and I keep scratching my head as to why so many people give AI a pass on the core critical questions.

Take, for instance, the "The College Essay Is Dead" article published in The Atlantic. So many people breathlessly share that article as proof that AI is here. The only problem is – the paragraph on "Learning Styles" at the beginning of the article (the one that the author would give a B+) is completely incorrect about learning styles. It gets everything wrong. If you were really grading it, it should get a zero. Hardly anyone talks about how learning styles are "shaped." Learning Styles are most often seen as something people magically have, without much thought given to where they came from. When is the last time you saw an article on learning styles that mentioned anything about "the interactions among learning styles and environmental and personal factors," or anyone at all referring to "the kinds of learning we experience"? What? Kinds of learning we experience?

The AI in this case had no clue what learning styles were, so it mixed what it could find from “learning” and the world of (fashion) “style,” and then made sure the grammar and structure followed all the rules it was programmed with. Hardly anyone is noticing that the cognition is still non-existent because they are being wowed by flawless grammar.

Those of us that are getting AI-generated submissions in our classes (and there is a growing number of teachers noticing this and talking about it) are seeing the same thing. We get submissions with perfect grammar and spelling – that completely fail the assignment. Grades are plummeting because students are believing the hype and turning to AI to relieve hectic lives/schedules/etc – and finding out the hard way that the thought leaders are lying to them.

(For me, I have made it a habit of interacting with my students and turning assignments into conversations, so I have always asked students to re-work assignments that completely miss the instructions. It’s just that now, more and more, I am asking the students with no grammar or spelling mistakes to start over. Every time, unfortunately. And I am talking to more and more teachers all over the place that say the exact same thing.)

But anyways – back to the concerning arguments about AI. Some say it is a tool just like Grammarly or spellcheck. Well, on the technical side it definitely isn’t, but I believe they are referring to the usage side… which is also incorrect. You see, Grammarly and spellcheck replaced functions that no one ever really minded being outsourced – you could always ask a friend or roommate or whoever “how do you spell this word?” or “hey, can you proofread this paragraph and tell me if the grammar makes sense?” and no one cared. That was considered part of the writing process. Asking the same person to write the essay for you? That has always been considered unethical in many cases.

And that is the weird part to me – as much as I have railed against the way we look at cheating here, the main idea behind cheating being wrong was that people wanted students to do the work themselves. I would still agree with that part – it’s not really “learning” if you just pay someone else to do it. Yet no one has really advanced a “ten ways cool profs will use AI in education” list that accounts for the fact that learners will be outsourcing part of the learning process to machines – and decades of research shows that skipping those parts is not good for actually learning things.

This is part of the problem with AI that the hype skips over – Using AI in education shortcuts the process of learning about thinking, writing, art creation, etc. to get to the end product quicker. Even when you use AI as part of the process to generate ideas or create rough drafts, it still removes important parts of the learning process – steps we know from research are important.

But again, educational thought leaders rarely listen to instructional designers or learning scientists. Just gotta get the retweets and grants.

At some point, the AI hype is going to smash head first into the wall of the teachers that hate cheating, where swapping out a contract cheating service for an AI bot is not going to make a difference to their beliefs. It was never about who or what provided the answers, but the fact that it wasn’t the student themselves that did it. Yes, that statement says a lot about Higher Ed. But also, it means your “AI is here, might as well accept it” hot take will find few allies in that arena.

But back (again) to the AI hype. The thought that AI adoption is like calculator adoption is a bit more complicated, but still not very accurate. First of all, that calculator debate is still not settled for some. And second of all, it was never “free-for-all calculator usage!” People realized that once you become an adult, few people care who does the math as long as it gets done correctly. You can just ask anyone working with you something like “hey – what was the total budget for the lunch session?” and as long as you got the correct number, no one cares if it came from you or a calculator or asking your co-worker to add it up. But people still need to learn how math works so they can know they are getting the correct answer (and using a calculator still requires you to know basic math in order to input the numbers correctly, unlike AI that claims to do that thinking for you). So students start off without calculators to learn the basics, and then calculators are gradually worked in as learners move on to more complex topics.

However, if you are asked to write a newsletter for your company – would people be okay with AI writing it? The answer is probably more complicated, but in many cases they might not care as long as it is accurate (they might want a human editor to check it). But if you are an architect that has to submit a written proposal about how your ideas would be the best solution to a client’s problem, I don’t think that client would want an AI doing the proposal writing for you. There are many times in life where your writing is about proving you know something, and no one would accept AI-generated text as that proof.

Yes, of course there is a lot that can be said about instructors that create bad assignments, and about how education has relied too much on bad essays to prove learning that they don’t actually prove. But sorry, AI-hypesters: you don’t get credit for just now figuring that one out when many, many of us in the ID, LS, and other related fields have been trying to raise the alarm on that for decades. You are right, but we see how (most of) you are only now taking it seriously… just because it strengthens your hype.

Many people are also missing what tools like ChatGPT are designed to do. I see so many lists of “how to use ChatGPT in education” that talk about using it as a tool to help generate ideas and answers to questions. However, it is actually a tool that aims to create final answers, essays, paintings, etc. We have had tools to generate ideas and look for answers for decades now – ChatGPT just spits out what those tools have always produced when asked (but not always that accurately, as many have noted). The goal of many of the people creating these AI tools is to replace humans for certain tasks – not just to give ideas. AI has been failing at that for decades, and it is not really getting any closer no matter how much you want to believe the “it will only get better” hype that AI companies want you to believe.

(It has to get good in the first place before it can start getting better, FYI.)

This leads back to my first assertion – that there is no such thing as “artificial intelligence.” You see, what we call AI doesn’t have “intelligence” – not the kind that can be “trained” or “get better” – and it is becoming more apparent to many that it never will. Take this article about how AI’s True Goal May No Longer Be Intelligence, for instance. It looks at how most AI research has given up on developing true Artificial General Intelligence, and wants you to forget that it was never achieved. What that means is that, through the years, things like machine learning and data analytics were re-branded as “intelligence” because AI is easier to sell to some people than those terms. But it was never intelligent, at least not in the way that AI companies like to describe it.

Sorry – we never achieved true AI, and what we call AI today is mislabeled.

“As a result, the industrialization of AI is shifting the focus from intelligence to achievement,” Ray writes in the article above. This is important to note according to Yann LeCun, chief AI scientist at Facebook owner Meta Platforms, who “expressed concern that the dominant work of deep learning today, if it simply pursues its present course, will not achieve what he refers to as ‘true’ intelligence, which includes things such as an ability for a computer system to plan a course of action using common sense.” Does that last part sound familiar? Those promoting a coming “Age of AI” are hinting that AI can now plan things for us, like class lessons and college papers.

(And yes, I can’t help but notice that the “Coming Age of AI” hype sounds just like the “Coming Age of Second Life” and “Coming Age of Google Wave” and so on hype. Remember how those were going to disrupt Higher Ed? Remember all the hand-wringing 5-10 years ago over how institutional leaders were not calling emergency meetings about the impact of learning analytics? I do.)

But why does “true” intelligence matter? Well, because AI companies are still selling us the idea that it will have “true” intelligence – because it will “only get better.” Back to the ZDNet article: “LeCun expresses an engineer’s concern that without true intelligence, such programs will ultimately prove brittle, meaning, they could break before they ever do what we want them to do.”

And it only gets worse: “Industrial AI professionals don’t want to ask hard questions, they merely want things to run smoothly…. Even scientists who realize the shortcomings of AI are tempted to put that aside to relish the practical utility of the technology.”

In other words, they are getting wowed by the perfect grammar while failing to notice that there is no cognition there, just like there hasn’t been for decades.

Decades? What I mean by that is, if you look at the history section of the Artificial General Intelligence article on Wikipedia, you will see that the hype over AI has been coming and going for decades. Even this article points out that we still don’t have AI: “In 2020, OpenAI developed GPT-3, a language model capable of performing many diverse tasks without specific training. According to Gary Grossman in a VentureBeat article, while there is consensus that GPT-3 is not an example of AGI, it is considered by some to be too advanced to classify as a narrow AI system.” Terms like narrow AI, weak AI, industrial AI, and others were just marketing terms to expand the umbrella of AI to include things that did not have “intelligence,” but could be programmed to mimic human activities. The term “AI,” for now, is just a marketing gimmick. It started out in the 1950s and still has not made any real advances in cognition. It will only get better at fooling people into thinking it is “intelligent.”

A lot is being said now about how AI is going to make writing, painting, etc “obsolete.” The vinyl record would like to have a word with you about that, but even if you were to believe the hype… there is more to consider. Whenever people proclaim that something is “obsolete,” always ask “for who?” In the case of emerging technologies, the answer is almost always “for the rich elite.” Technological advances often only remain free or low cost as long as their makers need guinea pigs to refine them.

If you are pushing students to use a new technology now because it is free, then you are just hastening the day that they will no longer be able to afford to use it. Not to mention the day it will be weaponized against them. If you don’t believe me, look at the history of Ed-Tech. How many tools started off as free or low cost, got added to curriculum all over the place, and then had fees added or increased? How many Ed-Tech tools have proven to be harmful to the most vulnerable? Do you think AI is going to actually plot a different course than the tech before it?

No, you see – AI is often being created by certain people who have always been able to pay other people to do their work. It was just messy to deal with underpaid labor, so AI is looked at behind the scenes as a way to ditch the messy human part of exploitation. All of the ways you are recommending that ChatGPT be used in courses are just giving them free labor to perfect the tool (at least to their liking). Once they are done with that, do you really think it will stay free or low cost? Give me a break if you are gullible enough to believe it will.

Mark my words: at some point, if the “Age of AI” hype takes off, millions of teachers are suddenly going to have to shift lesson plans once their favorite AI tool is no longer free or is just shut down. Or tons of students that don’t have empathetic teachers might have to sell another organ to afford class AI requirements.

(You might notice how all of the “thoughtful integration of AI in the classroom” articles are by people that would benefit greatly, directly or indirectly, from hundreds or thousands more students using ChatGPT.)

On top of all of that – even if the hype does prove true, at this point AI is still untested and unproven. Why are so many people so quick to push an untested solution on their students? I even saw someone promoting how students could use ChatGPT for counseling services. WTF?!?!? It is just abusive to send students who need counseling to unproven technology, no matter how short-staffed your Mental Health Services are.

But wait – there is more! People have also been raising concerns (rightly so) about how AI can be used for intentionally harmful purposes – everything from faking revenge porn to stock market manipulation. I know most AI evangelists express concern about this as well, but do they realize that every usage of AI for experimentation or course work will increase the ability of people to intentionally use AI for harm? I have played around with AI as much as anyone else, but I stopped because I realized that if you are using it, you are adding to the problem by helping to refine the algorithms. What does it mean, then, to unleash classes full of students onto it as free labor to improve it?

To be clear here, I have always been a fan of AI generated art, music, and literature as a sub-genre itself. The weird and trippy things that AI has produced in the past are a fascinating look at what happens when you only have a fraction of the trillions of constructs and social understandings we have in our minds. But to treat it as a technology that is fully ready to unleash on our students without much critical thought about how dangerous that could be?

Also, have you noticed that almost every AI ethics panel out there is made entirely of pro-AI people and no skeptics? I am a fan of the work of some of these “pro-AI people,” but where is the true critical balance?

I get that we can’t just ignore AI and do nothing, but there is still a lot of ground between “sticking our heads in the sand” and “full embrace.” For now, like many instructors, I tell students just not to use it if they want to pass. I’m not sure I see that changing any time soon. Most of the good ideas on all of those lists and articles of “how to integrate AI into the classroom” are just basic teaching designs that most teachers think of in their sleep. But what damage is going to be done by institutional leaders and company owners rushing in to implement unproven technology? That may “disrupt” things in ways that we don’t want to see come to fruition.

Is AI Generated Art Really Coming for Your Job?

You might have noticed this Twitter thread about improvements in AI-generated art work. Well, if you are still on Twitter that is. Here is the thread – well, at least, until You-Know-Who “MySpaces” Twitter out of service:

So let’s take a look at this claim that AI-generated artwork is coming to disrupt people’s jobs in the very near future. First of all, yes it is really cool to be able to enter a prompt like that and get results like this. There is obviously a lot of improvement in the AI. It actually looks useful now. But saying “a less capable technology is developing faster than a stable dominant technology (human illustration)”…?

[tweet 1596581772644569090 hide_thread=’true’]

Whoa, now. Time for a reality check. AI art is just now somewhat catching up with where human art has been for hundreds of years. AI was programmed by people that had thousands of years of artistic development easily available in a textbook. So saying that it is “developing faster”? With humans being able to create photo-realistic drawings as well as illustrate any idea that comes to their mind – where is there left to “develop” in art?

That is like a new car company saying they are “developing new cars faster than the stable industry.” Or someone saying that they have blazed new technology in travel because they can cross the country faster in a car than a horse and wagon did in the past. The art field had to blaze trails for thousands of years to get where it is, and the AI versions are just basically cheating to play catch up (and it is still not there yet).

The big question is: can this technology come up with a unique, truly creative piece of artwork on its own? The answer is still “no.” And beating the Lovelace Test is not proof that the answer is “yes,” because the Lovelace test is not really a true test of creativity.

[tweet 1596680005638975489 hide_thread=’true’]

Yes, all artists stand on the shoulders of others, but there is still an element to creativity that involves blending those influences into something else that transcends being strictly derivative of existing styles. Every single example of AI artwork so far has been very derivative of specific styles of art, usually on purpose because you have to pick an existing artistic style just to get your images in the first place.

But even the example above of an “otter making pizza in Ancient Rome” is NOT a “novel, interesting way” by the standards that true artists use. I am guessing that Mollick is referring to the Lovelace 2.0 Test, whose creator stated that “I didn’t want to conflate intelligence with skill: The average human can play Pictionary but can’t produce a Picasso.”

Of course, the average artist can’t produce an original painting on the level of Picasso either (unless they are just literally re-painting a Picasso, which many artists do to learn their craft). The people working on this particular AI Art Generator have basically advanced the skill of their AI to where it can pass the Lovelace 2.0 Test without really becoming truly creative. And honestly, “Draw me a picture of a man holding a penguin” is a sad measure of artistic creativity – no matter how complex you make that prompt as the test goes along.

But Mollick’s claims in this thread are just an example of people not understanding the field that they say is going to be disrupted. For example, marveling over correct lighting and composition? We have had illustration software that could do this correctly for decades.

[tweet 1596621251627335680 hide_thread=’true’]

Artists will tell you that in real-world situations, the time-consuming part of creating illustrations is figuring out what the human that wants the art… actually wants. “The otter just looks wrong – make it look right!” is VERY common feedback. The client has probably also specified several details about the otter, the plane, the positions of things, etc. that have to be present in any artwork they accept. Then there are all of the things they had in their head that they didn’t write down. Pulling those details out of clients is what professional artists are trained to do.

This is where AI in art, education, etc. always falls apart: programmers always have to start with the assumption that people actually know what they want out of the AI generator in the first place. The clients that professionals work with rarely want something as simple as “otter on a plane using wifi.” The reality is that they rarely even have that specific or defined an idea of what they want in the beginning. Learning to figure out what people actually want is a difficult skill – one that the experts in AGI/strong AI/etc tell us is probably never going to be possible for AI.

So, is this a cool development that will become a fun tool for many of us to play around with in the future? Sure. Will people use this in their work? Possibly. Will it disrupt artists across the board? Unlikely. There might be a few places where really generic artwork is the norm and the people that were paid very little to crank it out will be paid very little to input prompts. Look, Photoshop and asset libraries made creating company logos very, very easy a long time ago. But people still don’t want to take the 30 minutes it takes to put one together, because thinking through all the options is not their thing. You still have to think through those options to enter an AI prompt. And people just want to leave that part to the artists. The same thing was true of the printing press. Hundreds of years of innovation have taught us that the hard part of creating art is the human coming up with the ideas, not the tools that produce the art.

The Problem of Learning Analytics and AI: Empowering or Resistance in the Age of “AI”

So where to begin with this series I started on Learning Analytics and AI? The first post started with a basic and over-simplified view of the very basics. I guess the most logical place to jump to is… the leading edge of the AI hype? Well, not really… but there is an event in that area happening this week, so I need to go there anyways.

I was a bit surprised that the first post got some attention – thank you to those that read it. Since getting booted out of academia, I have been unsure of my place in the world of education. I haven’t really said much publicly or privately, but it has been a real struggle to break free from the toxic elements of academia and figure out who I am outside of that context. I was obviously surrounded by people that weren’t toxic, and I still adjunct at a university that I feel supports its faculty… but there were still other systemic elements that affect all of us that are hard to process once you are gone.

So, anyway, I just wasn’t sure if I could still write anything that made a decent point, and I wasn’t too confident I did that great of a job writing about such a complex topic in a (relatively) short blog post last time. Maybe I didn’t, but even a (potentially weak) post on the subject seems to resonate with some. Like I said in the last post, I am not the first to bring any of this up. In fact, if you know of any article or post that makes a better point than I do, please feel free to add it in the comments.

So, to the topic at hand: this week’s Empowering Learners in the Age of AI conference in Australia. My concern with this conference is not with who is there – it seems to be a great group of very knowledgeable people. I don’t know some of them, but many are big names in the field that know their stuff. What sticks out to me is who is not there, as well as how AI is being framed in the brief descriptions we get. But neither of those points is specific to this conference. In fact, I am not really looking at the conference as much as some parts of the field of AI, with the conference just serving as proof that the things I am looking at are out there.

So first of all, to address the name of the conference. I know that “empowering learners” is a common thing to say, not just in AI but in education in general. But it is also a very controversial and problematic concept. This is a concern I hang on all of education, and even on myself, since I like the term “empower” as well. No matter what my intentions (or anyone else’s) are, the term still places the institution and the faculty at the center of the power in the learning process – there to decide whether the learners get to be empowered or not. One of the best posts on this topic is by Maha Bali: The Other Side of Student Empowerment in a Digital World. At the end of the post, she gets to some questions that I want to ask of the AI field, including these key ones:

“In what ways might it reproduce inequality? How participatory has the process been? How much have actual teachers and learners, especially minorities, on the ground been involved in or consulted on the design, implementation, and assessment of these tools and pedagogies?”

I’ll circle back to those throughout the post.

Additionally, I think we should all question the “Age of AI” and “AI Society” part. It is kind of complicated to get into what AI is and isn’t, but the most likely form of AI we will see emerge first is what is commonly called “Artificial General Intelligence” (AGI), which is a deceptive way of saying “pretending to act like humans without really being intelligent like we are.” AGI is really a focus on creating something that “does” the same tasks humans can, which is not what most people would attribute to an “Age of AI” or “AI Society.” This article on Forbes looks at what this means, and how experts are predicting that we are 10-40 years away from AGI.

Just as an FYI, I remember reading in the 1990s that we were 20-40 years away from AGI then as well.

So we aren’t near an Age of AI, probably not in many of our lifetimes, and even the expert opinions may not end up being true. The Forbes article fails to mention that there were many problems with the work that claimed to be able to determine sexuality from images. In fact, there is a lot to be said about differentiating AI from BS that rarely gets brought up by the AI researchers themselves. Tristan Greene best sums it up in his article about “How to tell the difference between AI and BS“:

“Where we find AI that isn’t BS, almost always, is when it’s performing a task that is so boring that, despite there being value in that task, it would be a waste of time for a human to do it.”

I think it would have been more accurate to say you are “bracing learners for the age of algorithms” than empowering them for an age of AI (one that is at least decades off and may never actually happen, according to some). But that is me, and I know there are those that disagree. So I can’t blame people for being hopeful that something will happen in their own field sooner than it might in reality.

Still, the most concerning thing about the field of AI is who is not there in the conversations, and the Empowering Learners conference follows the field – at least from what I can see on their website. First of all, where are the learners? Is it really empowering for learners when you can’t really find them on the schedule or in the list of speakers and panelists? Why is their voice not up front and center?

Even bigger than that is the problem that has been highlighted this week – but one that has been there all along:

The specific groups she is referring to are BIPOC, LGBTQA, and Disabilities. We know that AI has discrimination coded into it. Any conference that wants to examine “empowerment” will have to make justice front and center because of long-existing inequalities in the larger field. Of course, we know that different people have different views of justice, but “empowerment” would also mean that each person who faces discrimination gets to determine what it means for them. It’s really not fair to hold a single conference accountable for issues that existed long before the conference did, but by using the term “empowerment” you are setting yourself up against a pretty big standard.

And yes, “empowerment” is in quotes because it is a problematic concept here, but it is the term the field of AI – and really a lot of the world of education – uses. The conference web page does ask “who needs empowering, why, and to do what?” But do they mean inequality? And if so, why not say it? There are hardly any more mentions of this question after it is brought up, much less anything connecting the question to inequality, in most of the rest of the program. Maybe it will be covered at the conference – it is just not very prominent at all as the schedule stands. I will give them the benefit of the doubt until after the conference happens, but if they do ask the harder questions, then they should have highlighted that more on the website.

So in light of the lack of direct reference to equity and justice, the concept of “empowerment” feels like it is taking on the role of “equality” in those diagrams that compare “equality” with “equity” and “justice”:

Equality vs equity vs justice diagram
(This adaptation of the original Interaction Institute for Social Change image by Angus Maguire was found on the Agents of Good website. Thank you Alan Levine for helping me find the attribution.)

If you aren’t going to ask who is facing inequalities (and I say this looking at the fields of AI, Learning Analytics, Instructional Design, Education – all of us), then you are just handing out the same empowerment to everyone. Just asking “who needs empowering, why, and to do what?” doesn’t get to critically examining inequality.

In fact, the assumption is being made by so many people in education that you have no choice but to utilize AI. One of the best responses to the “Equality vs Equity vs Justice” diagrams has come from Bali and others: what if the kids don’t want to play soccer (or eat an apple or catch a fish or whatever else is on the other side of the fence in various versions)?

Resistance is a necessary aspect of equity and justice. To me, you are not “empowering learners” unless you are teaching them how to resist AI itself first and foremost. But resistance should be taught to all learners – even those that “feel they are safe” from AI. This is because 1) they need to stand in solidarity with those that are the most vulnerable, to make sure the message is received, and 2) they aren’t as safe as they think.

There are many risks in AI, but are we really taking the discrimination seriously? In the linked article, Princeton computer science professor Olga Russakovsky said

“A.I. researchers are primarily people who are male, who come from certain racial demographics, who grew up in high socioeconomic areas, primarily people without disabilities. We’re a fairly homogeneous population, so it’s a challenge to think broadly about world issues.”

Additionally, (now former) Google researcher Timnit Gebru said that scientists like herself are

“some of the most dangerous people in the world, because we have this illusion of objectivity.”

Looking through the Empowering Learners event, I don’t see that many Black and Aboriginal voices represented. There are some People of Color, but not nearly enough considering they would be the ones most affected by the discrimination that would impede any true “empowerment.” And where are the experts on the harm caused by these tools, like Safiya Noble, Chris Gilliard, and many others? The event seems weighted towards voices that would mostly praise AI, and it is a very heavily white set of voices as well. This is the way many conferences are, including those looking at education in general.

Also, considering that this is in Australia, where are the Aboriginal voices? It’s hard to tell from the schedule itself. I did see on Twitter that the conference will start with an Aboriginal perspective. But when is that? In the 15-minute introductory session? That is nowhere near enough time for that. Maybe they are elsewhere on the schedule and just not noted well enough to tell. But why not make that a prominent part of the event rather than part of a 15-minute intro (if that is what it is)?

There are some other things I want to comment on about the future of AI in general:

  • The field of AI is constantly making references to how AI is affecting and improving areas such as medicine. I would refer you back to the “How to tell the difference between AI and BS” article for much of that. But something that worries me about the entire AI field talking this way is that they are attributing “artificial intelligence” to things that boil down to advanced pattern recognition built mainly on human intelligence. Let’s take, for example, recognizing tumors in scans. Humans program the AI to recognize patterns in images that look like tumors. Everything that the AI knows to look for comes directly from human intelligence. Just because you can then get the algorithm to repeat what the humans programmed it to do thousands of times per hour, that doesn’t make it intelligent. It is human pattern recognition that has been digitized, automated, and rapidly repeated (see the pattern-recognition sketch after this list). This is generally what is happening with AI in education, defense, healthcare, etc.
  • Many leaders in education in general like to say that “institutions are ill-prepared for AI” – but how about how ill-prepared AI is for the equity and reality?
  • There is also often talk in the AI community about building trust between humans and machines – we see examples of that at the conference as well: “can AI truly become a teammate in group learning or a co-author of a ground-breaking scientific discovery?” I don’t know what the speaker plans to say, but the answer is no. No, we shouldn’t build trust, and no, we shouldn’t anthropomorphize AI. We should always be questioning it. But we also need to be clear, again, that AI is not the one that is writing (or creating music or paintings). This is the weirdest area of AI – they feed a bunch of artistic or musical or literary patterns into AI, tell it how to assemble the patterns, and when something comes out it is attributed to AI rather than the human intelligence that put it all together. Again, the machine being able to repeat and even refine what the human put there in the first place is not the machine creating it. Take, for example, these different AI generated music websites. People always send these to me and say “look how well the machine put together ambient or grindcore music or whatever.” Then I listen… and it is a mess. They take grindcore music, chop it up into bits, run those bits through pattern recognition, and spit out a random mix – one that generally doesn’t sound like very good grindcore. Ambient music works the best to uninitiated ears, but to fans of the music it still doesn’t work that great.
  • I should also point out about the conference that there is a session on the second day that asks “Who are these built for? Who benefits? Who has the control?” and then mentions “data responsibility, privacy, duty of care for learners” – which is a good starting point. Hopefully the session will address equity, justice, and resistance specifically. The session, like much of the field of AI, rests on the assumption that AI is coming and there is nothing you can do to resist it. Yes the algorithms are here, and it is hard to resist – but you still can. Besides, experts are still saying 10-40 years for the really boring stuff to emerge as I examined above.
  • I also hope the conference will discuss the meltdown that is happening in AI-driven proctoring surveillance software.
  • I haven’t gotten much into surveillance yet, but yes all of this relies on surveillance to work. See the first post. Watch the Against Surveillance Teach-In Recording.
  • I was about to hit publish on this when I saw an article about a Deepfake AI Santa that you can make say whatever you want. The article says “It’s not nearly as disturbing as you might think”… but yes, it is. Again, people are saying something made by AI is good and realistic when it is not. The Santa moves and talks like a robot with zero emotion. Here again, they used footage of a human actor and human voice samples, and the “AI” is an algorithm that chops it up into the parts that make up your custom message. How could this possibly be misused?
  • One of the areas of AI that many in the field like to hype are “conversational agents,” aka chatbots. I want to address that as well since that is an area that I have (tried) to research. The problem with researching agents/bots is that learners just don’t seem to be impressed with them – it’s just another thing to them. But I really want to question how these count as AI after having created some myself. The process for making a chatbot is that you first organize a body of information into chunks of answers or statements that you want to send as responses. You then start “training” the AI to connect what users type into the agent (aka “bot”) with specific statements or chunks of information. The AI makes a connection and sends the statement or information or next question or video or whatever it may be back to the user. But the problem is, the “training” is you guessing dozens of ways that the person might ask a question or make a statement (including typos or misunderstandings) that matches with the chunk of information you want to send back. You literally do a lot of the work for the AI by telling it all the ways someone might type something into the agent that matches each chunk of content. The tools usually want at least 20 or more. What this means is that most of the time, when you are using a chatbot, it gives you the right answer because you typed in one of the most likely questions that a human guessed and added to the “training” session. In the rare cases where someone types something a human didn’t guess, the Natural Language Processing kicks in to try and guess the best match. But even then it could be a percentage of similar words more than “intelligence.” So, again, it is human intelligence that is automated and re-used thousands of times a minute – not something artificial that has a form of intelligence (see the chatbot sketch after this list). Now, this might be useful in a scenario where you have a large body of information (like an FAQ bot for the course syllabus) that could use something better than a search function. Or maybe a branching-scenarios lesson. But it takes time to create a good chatbot. There is still a lot of work and skill in creating the questions and responses well. But to use chatbots for a class of 30, 50, 100? You will probably spend so much time making it that it would be easier to just talk to your students.
  • Finally, please know that I realize that what I am talking about still requires a lot of work and intelligence to create. I’m not doubting the abilities of the engineers and researchers and others that put their time into developing AI. I’m trying to get at the pervasive idea that we are in an Age of AI that can’t be avoided – an idea that was even pushed in a documentary web series a year ago. I also question whether “artificial intelligence” is the right term for all of this, rather than something more accurate like “automation algorithms.”
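To make the pattern-recognition point from the first bullet a bit more concrete, here is a deliberately tiny sketch in Python. It is a nearest-neighbor “classifier” on made-up numbers – real medical imaging systems use far more complicated models, and every feature, label, and value here is hypothetical – but it illustrates the shape of the argument: everything the program “knows” is a set of human-chosen features and human-applied labels, just repeated very quickly.

```python
# A minimal, hypothetical sketch (not any real system): a "tumor detector" that
# is nothing but human-labeled examples plus a distance calculation. In a real
# tool, the labels would come from radiologists labeling large numbers of scans.

import math

# Every bit of "knowledge" in the system: human-chosen features, human labels.
# Each scan is reduced to two made-up scores a human decided were worth measuring.
LABELED_SCANS = [
    ((0.9, 0.8), "tumor"),      # (irregular-border score, density score)
    ((0.8, 0.9), "tumor"),
    ((0.2, 0.1), "no tumor"),
    ((0.1, 0.3), "no tumor"),
]

def classify(features):
    """Label a new scan by copying the label of the closest human-labeled scan."""
    nearest = min(LABELED_SCANS, key=lambda example: math.dist(features, example[0]))
    return nearest[1]

# The program never "understands" tumors -- it repeats the humans' judgments,
# thousands of times an hour if you want. That is automation, not intelligence.
print(classify((0.85, 0.75)))   # -> "tumor"
print(classify((0.15, 0.20)))   # -> "no tumor"
```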
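And here is a similar sketch for the chatbot bullet. This is not the code of any actual chatbot platform – the intents, phrases, and threshold are all hypothetical, and I am standing in for the NLP fallback with a simple string-similarity score from Python’s standard library – but the division of labor matches the “training” workflow described above: a human writes every response and guesses every way a student might ask for it, and the “AI” part is just matching.

```python
# A minimal, hypothetical sketch of the chatbot "training" workflow described above.
# Humans author the answers and guess the questions; the code just matches strings.

from difflib import SequenceMatcher

# Human-authored chunks of content the bot is allowed to send back.
RESPONSES = {
    "late_policy": "Late work loses 10% per day, up to five days.",
    "office_hours": "Office hours are Tuesdays 2-4pm in the course chat room.",
}

# Human-guessed ways a student might ask for each chunk (real tools want 20+ each).
TRAINING_PHRASES = {
    "late_policy": [
        "what is the late policy",
        "can i turn in my assignment late",
        "how many points do i lose for late work",
    ],
    "office_hours": [
        "when are office hours",
        "how do i meet with the instructor",
        "what time can i ask questions live",
    ],
}

def best_intent(user_text):
    """Find the human-guessed phrase most similar to what the user typed."""
    user_text = user_text.lower().strip()
    best_name, best_score = None, 0.0
    for intent, phrases in TRAINING_PHRASES.items():
        for phrase in phrases:
            score = SequenceMatcher(None, user_text, phrase).ratio()
            if score > best_score:
                best_name, best_score = intent, score
    return best_name, best_score

def reply(user_text):
    intent, score = best_intent(user_text)
    # The "fallback" is just a similarity threshold, not understanding.
    if intent is None or score < 0.6:
        return "Sorry, I don't have an answer for that. Try rephrasing?"
    return RESPONSES[intent]

print(reply("Can I turn my assignment in late?"))  # close to a guessed phrase, so it "works"
print(reply("Why does entropy increase?"))         # nothing like the guesses, so it falls back
```

Commercial tools use fancier matching than difflib, of course, but the economics stay the same: the more human-guessed phrases and human-written answers you put in, the smarter the bot appears.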

Again, everything I touch on here is not as much about this conference as it is about the field of AI, since this conference is really just a lot of what is in the AI field concentrated into two days and one website. The speakers and organizers might have already planned to address everything I brought up here a long time ago, and they just didn’t get it all on the website. We will see – there are some sessions with no description and just a bio. But still, at the core of my point, I think that educators need to take a different approach to AI than we have so far (maybe by not calling it that when it is rarely anything near intelligent) by taking justice issues seriously. If the machine is harming some learners more than others, the first step is to teach resistance, and to be successful in that, all learners and educators need to join in the resistance.