For me, one of the more informative ways to track a trend in education once it gets wider societal notice is by following people outside of education. As predicted, people are starting to complain about how low the quality of the output from ChatGPT and other generative AI is. Students who use AI are seeing consistently lower grades than peers who don’t (“AI gives you a solid C- if you just use it, maybe a B+ if you become a prompt master”). Snarky jokes about “remember when AI was a thing for 5 seconds” abound. None of this is absolute evidence, but it usually indicates that a lot of dissatisfaction exists under the surface – which is where the life or death of a trend is actually decided.
Of course, since this is education, AI will stick around and dominate conversations for a good 5 more years… at least. As much as many people like to complain about education (especially Higher Ed) being so many years behind the curve, those who rely on big grant funding to keep their work going actually depend on that reluctance to change. It’s what keeps the lights on well after a trend has peaked.
As I looked at in a recent post, once a trend has peaked, there are generally three paths it can take as it descends. Technically, it is almost impossible to tell if a trend has peaked until it clearly starts down one of those paths. Some trends get a little quiet like AI is now, and then hit a new stride once a new aspect emerges from the lull. But many trends also decline after exactly the kind of lull AI is in now. And – to me at least, I could be wrong – the signs seem to point to a descent (see the first paragraph above).
While it is very, very unlikely that AI will follow the path of Google Wave and just die a sudden, weird death… I still say there is a very slight possibility. To those who say it is impossible: I want to spend a minute on how it could happen, even if it remains unlikely.
One of the problematic aspects of AI that is finally getting some attention (even though many of us have been pointing it out all along) is that most services like ChatGPT were illegally built on intellectual property without consent or compensation. Courts and legal experts seem to be taking a “well, whatever” attitude towards this (with a few obvious exceptions), usually based on a huge misunderstanding of what AI really is. Many techbros push this misunderstanding as well – that AI actually is a form of “intelligence” and therefore should not be held to the copyright standards that other software has to abide by when utilizing intellectual property.
AI is a computer program. It is not a form of intelligence. Techbros arguing that it is are just trying to save their own companies from lawsuits. When a human being reads Lord of the Rings, they may write a book that is heavily influenced by their reading, but there will still be major differences because human memory is not perfect. There are still limits – although not precisely defined – on when an influence becomes plagiarism. It is the imperfection of memory in most human brains that makes it possible for influences to be considered “legal” under the law, as well as moral in most people’s eyes.
While AI relies on “memory,” it’s not memory like humans have. It is precise and exact. When AI constructs a Lord of the Rings-style book, it has a perfect, exact representation of every single word of those books to make derivative copies from. Everything created by AI should be considered plagiarism because AI software has a perfect “memory.” There is nothing about AI that constitutes an imperfect influence. And most people who actually program AI will tell you something similar.
Now that creative types are noticing their work being plagiarized by AI without their permission or compensation, lawsuits are starting. And so far the courts are often getting it wrong. This is where we have to wade into the murky waters of the difference between what is legal and what is moral. All AI built on intellectual property without permission or compensation is plagiarism. The law and the courts will probably land on the side of techbros and make AI plagiarism legal. But that still won’t change the fact that almost all AI is immorally created. Take away from that what you will.
HOWEVER… there is still a slight possibility that the courts could side with content creators. Certainly if Disney gets wind that their stuff was used to “train” AI (“train” is another problematic term). IF courts decide that content owners are due either compensation or a say in how their intellectual property is used (which, again, is the way it should be), then there are a few things that could happen.
Of course, the first possibility is that AI companies could be required to pay content owners. It is possible that some kind of Spotify screwage could happen, where content owners get a fraction of a penny every time something of theirs is used. But I doubt people will be duped by that once again, so the amount of compensation could be significant. Multiply that by millions of content owners, and most AI companies would just fold overnight. Most of their income would go towards content owners. Those relying on grants and investments would see those disappear fast – no one would want to invest in a company that owes so much to so many people.
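To make the scale concrete, here is a rough back-of-the-envelope sketch in Python. Every number in it (the per-use rate, how often an owner’s work gets used, how many owners there are) is a made-up assumption for illustration, not a figure from any court filing or company report.

```python
# Hypothetical back-of-the-envelope royalty math (all numbers are assumptions).
per_use_royalty = 0.001            # a "fraction of a penny" per use, in dollars
uses_per_owner_per_year = 1_000_000  # assumed times an owner's work is drawn on per year
content_owners = 5_000_000           # assumed number of rights holders with claims

annual_payout = per_use_royalty * uses_per_owner_per_year * content_owners
print(f"Hypothetical annual royalty bill: ${annual_payout:,.0f}")
# With these assumed numbers, the bill comes to $5,000,000,000 per year.
```

Even with a deliberately tiny per-use rate, the totals pile up quickly once you multiply across millions of rights holders – which is the point above.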
However, not everyone will want money for their stuff. They will want it pulled out. And there is also a very real possibility that techbros will somehow successfully argue about the destructive effect of royalties or whatever. So another option would be to go down the road of mandated permission to use any content to “train” AI algorithms. While there is a lot of good open source content out there, and many people who will let companies use their content for “training”… most AI companies would be left with a fraction of the data they originally used to train their software. If any. I don’t see many companies being able to survive being forced to re-start from scratch with 1-2% of the data they originally had. Because to actually remove data from generative AI, you can’t just delete it going forward. You have to go back to nothing, strip out the data you can’t use anymore, and re-do the whole “training” from the beginning (see the sketch below).
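Here is a minimal sketch of why that is, assuming a toy corpus and a stand-in train() function (both hypothetical). A trained model is not a database with rows you can delete; the disallowed works are baked into the weights, so honoring a removal request means filtering the corpus and running the entire training process over again.

```python
# Hypothetical sketch: "removing" content from a generative model.
# The weights are a product of ALL the training data, so there is no
# "DELETE WHERE owner = X" operation on a finished model.

def train(corpus: list[str]) -> dict:
    """Stand-in for a full training run (in reality: weeks of GPU time)."""
    return {"weights": f"derived from {len(corpus)} documents"}

# Toy corpus with made-up consent flags.
corpus = [
    {"text": "open-licensed article", "consented": True},
    {"text": "scraped novel", "consented": False},
    {"text": "scraped news archive", "consented": False},
]

model = train([d["text"] for d in corpus])  # weights now encode everything

# To comply with a removal order, you cannot edit `model` in place.
# You have to rebuild the corpus without the disallowed works...
allowed = [d["text"] for d in corpus if d["consented"]]

# ...and retrain from nothing, often with only a small fraction of the data.
model = train(allowed)
print(model)  # {'weights': 'derived from 1 documents'} -- 1 of the original 3
```

The retraining step is the expensive part: it is the same long, costly compute run that produced the original model, now performed on whatever sliver of data is still allowed.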
Again, I must emphasize that I think these scenarios are very unlikely. Courts will probably find a way to keep AI companies moving forward with little to no compensation for content owners. This will be the wrong decision morally, but most will just hang their hat on the fact that it is legal and not think twice about who is being abused. Just like they currently do with the sexism, racism, ableism, homophobia, transphobia, etc. that is known to exist within AI responses. I’m just pointing out how AI could feasibly disappear almost overnight, even if that is highly unlikely.
Matt is currently an Instructional Designer II at Orbis Education and a Part-Time Instructor at the University of Texas Rio Grande Valley. Previously he worked as a Learning Innovation Researcher with the UT Arlington LINK Research Lab. His work focuses on learning theory, Heutagogy, and learner agency. Matt holds a Ph.D. in Learning Technologies from the University of North Texas, a Master of Education in Educational Technology from UT Brownsville, and a Bachelor of Science in Education from Baylor University. His research interests include instructional design, learning pathways, sociocultural theory, heutagogy, virtual reality, and open networked learning. He has a background in instructional design and teaching at both the secondary and university levels and has been an active blogger and conference presenter. He also enjoys networking and collaborative efforts involving faculty, students, administration, and anyone involved in the education process.